About that Strad vs modern violin study thingy…
My colleague Frank Almond did a very thorough take-down of the whole thing here:
“These instruments were loaned with the stipulation that they remain in the condition in which we received them (precluding any tonal adjustments or even changing the strings), and that their identities remain confidential. All strings appeared to be in good condition.”
There are countless factors that can shape perceptions while comparing violins (even in a double-blind study), and this was the first genuine red flag for me. The setup of a fine violin is critical and highly subjective: strings, placement of the bridge, etc. The soundpost can move a fraction of an inch and completely change the way a violin responds and sounds, and this is particularly true for notoriously finicky Strads. It’s not uncommon for a spectacular instrument to seem inferior just because of an unusual setup or old strings (or getting bumped around on an airplane). It is unclear who set up the newer instruments (or when). I enjoyed the confident visual assessment of the strings; for the record, I regularly play on a 1715 Stradivari and use Vision Solo strings. They completely burn out after about a month and the instrument sounds totally different, but they look fine…
(snip snip)
I could go on, but some of you may get the idea by now. The conclusions of the study seem predetermined to a degree: statistically, most of the participants couldn’t tell new from old, and everyone’s perceptions were somewhat altered by the knowledge that some of the instruments were Strads. Is this a revelation, given that actual humans were involved, under conditions that heavily favored the newer instruments? I can guarantee that the results would be completely different if the double-blind study had a third step in a concert hall (or two), with a luthier on hand and some of the participants listening to the instruments as well as playing them.
With all due respect, and despite the rigorous science and controls applied, my sense is that the researchers started with a premise and set out to prove it.
I’d put it a little differently. I think the researchers did demonstrate something, and not because they set out with a predetermined opinion. But it wasn’t worth demonstrating. It would certainly have been possible to design the same controls into a test that actually said something meaningful about the comparative performance of the instruments under the same conditions. But the researchers likely didn’t grasp the difference between that and what they actually proved.
My father spent his entire career designing experiments and analyzing the resulting data. One of his favorite axioms was “if it’s not worth doing, it’s not worth doing well” (which is not quite the version of that axiom you grew up hearing, of course). Aside from its other flaws (can you say “anecdotal data”?), this study was a case in point.
Almond’s assessment is directly in line with what I’ve read from one of the playing participants. The “speed-date” rule, in particular, renders the whole thing meaningless: one minute on violin #1, followed by one minute on violin #2, with no going back allowed. Nobody can form an accurate impression of the qualities of *any* instrument under those conditions. And I agree that the statement that “the strings all looked just hunky-dory” is utterly laughable.