If you read any of the news coverage of the recently released assessment of Gulf of Maine cod, you probably came away with three things.
- Fishermen are seriously questioning the new assessment (this was the focus of many mainstream media reports, not to mention a lot of political rhetoric).
- The 2011 assessment – which indicates that the stock has been overfished in recent years and stands no chance of reaching rebuilt status by 2014, as hoped – was a dramatic reversal from the previous (2008) assessment, which had suggested that Gulf of Maine cod was on the rebound. The discrepancy is one reason fishermen have greeted the report so skeptically.
- The revised assessment has put the fear of God – that is, of severe catch restrictions, if not this year then in subsequent years – into New England fishermen. That’s another reason fishermen and politicians are subjecting the new numbers to such scrutiny.
What you probably didn’t get was a good explanation of why the two assessments were so different. Have cod stocks really evaporated in the past three years? Or was one of the assessments in error? And, if so, which one? A few articles referred to the fact that there were differences in the data and computer model used. But the details I crave were sorely lacking.
So I headed down to the other end of Woods Hole (it’s a grueling two-block walk) and spent a couple of hours with Liz Brooks and Mike Palmer, two of the scientists at the Northeast Fisheries Science Center who were involved in producing the 2011 Gulf of Maine cod stock assessment. Here’s what I learned.
A more nuanced assessment
There’s actually some good news to be found in the 2011 assessment. Both Brooks and Palmer are of the professional opinion that the actual status of the cod stock has not changed dramatically over the past three years, and that it definitely hasn’t crashed. In fact, they think it has been slowly rebuilding over the past several years – the latest three included.
Yep, you read that right. Gulf of Maine cod is rebuilding, recovering from decades (if not centuries) of overfishing. It’s been doing that steadily for several years. That’s the good news.
But, and this is a big BUT, it’s doing it a lot more slowly than the 2008 numbers suggested … so slowly that there’s no way Gulf of Maine cod will make it all the way to an official designation of “rebuilt” by 2014, a target date suggested by the federal legislation that mandates fisheries management. That’s the bad news, and the reason that fishermen are worried. Federal mandates to end overfishing as soon as it’s detected and rebuild fish stocks as quickly as possible could drive regulators to severely cut back cod fishing quotas. The idea of shutting down the Gulf of Maine cod fishery even surfaced.
And that, in turn, could bring the whole groundfish fishery – cod, haddock, pollock, flounder – to a grinding halt, because you can’t stick a big net down in the water and expect the cod to please swim aside. The fifteen species in the New England groundfish fishery are all regulated together because they all hang together – they prefer similar kinds of habitat, stay on or near the sea floor, and are all caught in the same trawl nets. If regulators severely limit the catch of one species, it can become what’s known as a choke species – a single species that impinges on the ability to fish for all the others. Cod could become the ultimate choke species.
But Brooks and Palmer stressed that regulators do have some leeway, as was demonstrated at last week’s meeting of the New England Fishery Management Council. The assessment itself makes no regulatory recommendations. It simply provides the numbers necessary to make informed management decisions (a refrain that should sound eerily familiar to climate scientists/science buffs).
What’s behind the change
Which brings me back to the assessment. The dramatic change in the stock’s estimated rate of growth is still that – a dramatic change. So what’s responsible?
Well, for one, scientists now have a better handle on how quickly cod grow to a size sufficient for reproduction. Previously, estimates of what’s known as weight-at-age were based entirely on the fish that fishermen brought to the fish pier. This time around, scientists also got to look at the fish that fishermen tossed back, their discards. That’s important because there are legal size limits, so the fish that get kept are the biggest ones – not just the oldest ones, but the largest ones at a range of ages. It’s like estimating the average height of 4-yr old kids based only on the 4-yr olds that were tall enough to get on an amusement park ride. When scientists looked at the discards, they figured out that the average 4- or 5-year old cod wasn’t as big as they’d thought. And that dropped their estimates of how many fish would be reproducing each year and contributing to the population’s growth.
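The bias at work here is easy to see with a toy simulation. Everything below is invented for illustration – the weights, the legal limit, and the distribution have nothing to do with the real assessment data – but the pattern is the same: if you can only measure the fish big enough to keep, your average comes out too high.

```python
import random

random.seed(42)

# Hypothetical weights (kg) of 4-year-old cod. These numbers are made up
# purely to illustrate the sampling bias, not drawn from any assessment.
population = [random.gauss(2.0, 0.6) for _ in range(10_000)]

legal_limit = 2.2  # hypothetical minimum landable weight

# Only fish at or above the limit reach the pier; the rest are discarded.
landed = [w for w in population if w >= legal_limit]

true_mean = sum(population) / len(population)
landed_mean = sum(landed) / len(landed)

print(f"true mean weight-at-age:  {true_mean:.2f} kg")
print(f"mean of landed fish only: {landed_mean:.2f} kg")  # biased high
```

Looking at discards is, in effect, getting to measure the kids who were turned away from the ride.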
The other factor that Brooks and Palmer point to is two survey trawls (that’s where fishery scientists go out, drop a net over the back of the boat, and then use the fish they catch for science rather than profit) – one in 2007 and one in 2008 – which both contained extremely high numbers of fish born in 2005, far more than any of the others. Those oddballs raised some eyebrows on the 2008 assessment team, but there was no clear reason to discard the data – no errors or unusual circumstances that would explain away the high numbers. In addition, the fact that the same pattern was seen two years in a row made it seem more possible that it was real, not just a fluke. So the team decided to include the data. And, to quote Palmer, “the model chased those highs.”
Think of it this way: you’re considering a purchase on Amazon.com, so you scroll down and check out the consumer reviews. If you see three people giving whatever it is you’re thinking of buying a 1-star rating and one lone individual giving it five stars, you’d probably conclude that the 1-star rating was the more deserved one and that the 5-star guy isn’t so perceptive (at least that’s what I would do). But the pure mathematical average would be a 2-star rating – twice what the product deserves.
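The review arithmetic in miniature – and a hint at why a statistic that resists outliers (here, the median) behaves more like the skeptical shopper than the raw average does:

```python
from statistics import mean, median

ratings = [1, 1, 1, 5]  # three 1-star reviews plus one 5-star outlier

print(mean(ratings))    # 2.0 -- twice the rating most reviewers gave
print(median(ratings))  # 1.0 -- the consensus, unmoved by the outlier
```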
The computer model in use in 2008 had no way to take into account the fact that those two trawls were very different from all the others, and therefore possibly suspect. Or, in science-speak, it couldn’t account for uncertainty in the data. So when the model predicted, say, the number of five-year-old fish that would be around in 2010 – fish that would be legal to catch, and also of an age to reproduce and help the population grow further – the result was what researchers now, with hindsight, know to be an overestimate.
Actually, to simply call it hindsight does a disservice to Palmer, Brooks, and the others working day in and day out to produce – and improve – fisheries assessments. Indeed, subsequent years’ trawl data failed to show similar peaks in the 2005 year class, making it clear to the human eye that those two trawls weren’t accurate reflections of reality. And, even with the old model, the influence of those two oddballs would have diminished as more data were added. But the 2011 assessment didn’t use the old model. It used a new one that can incorporate uncertainty and downplay the importance of such outliers. When Palmer steps back through previous years using the new model – what he calls a retrospective analysis – the spikes in 2007 and 2008 all but disappear. And when the new model projects into the future, the rapid growth predicted in 2008 also disappears.
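One simple way a model can “incorporate uncertainty” is to weight each observation by how much it trusts it – typically the inverse of an assumed variance – so that suspect data points pull the estimate less. The sketch below is not the assessment model; the survey indices and variances are invented solely to show the mechanism.

```python
# Toy survey indices over eight years, with two suspicious spikes
# standing in for the 2007-2008 oddballs. All numbers are invented.
indices   = [10, 11, 9, 40, 38, 10, 11, 9]
variances = [1,  1,  1, 25, 25, 1,  1,  1]  # spikes treated as very uncertain

plain_mean = sum(indices) / len(indices)

# Inverse-variance weighting: trusted points count for more.
weights = [1 / v for v in variances]
weighted_mean = sum(w * x for w, x in zip(weights, indices)) / sum(weights)

print(f"plain mean:    {plain_mean:.2f}")    # dragged up by the spikes
print(f"weighted mean: {weighted_mean:.2f}") # stays near the bulk of the data
```

A model built this way doesn’t have to throw the oddball trawls out; it just stops chasing them.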
For better or for worse
As Palmer and Brooks described the intricacies of the 2011 assessment, they exuded a combination of pride and defensiveness. Both feel that the changes made between 2008 and 2011 are significant improvements that they worked hard to bring about. And yet, they are now under attack, with fishermen and politicians calling their science into question and pushing for further improvements and faster turnaround times. Ironically, efforts to better account for the uncertainty in assessment data were, in part, prompted by previous confrontations like this. And Brooks says she can’t help but note that there’s no controversy over the pollock assessment, which was produced with the very same methodology but ended up showing that the stock was in better shape than expected.
But neither contends that the assessment process is perfect, as is. I asked what they would do with an unlimited budget to improve assessments. Brooks, a computer modeler, immediately said she’d hire more people to work on building and testing better models. Palmer, the self-professed data geek, said he would love to see an electronic data collection system put in place that would allow fish to be tracked from-sea-to-sale, so to speak. Right now, there’s data about where fishermen are fishing, there’s data about how much a given fisherman caught, and then there’s biological data (age, length, weight) collected by fishery scientists once the fish reach the auction house. But there’s no easy way to connect those different pools of data and ask questions like how many five-year-old fish were caught in a given area.
Both agreed on one point, though. The current push for faster assessments is likely to undermine any progress toward better assessments. Whether it’s building new models or cultivating new data sources, making improvements requires time and effort. If their limited personnel and resources are spent “on the treadmill” of annual stock assessments for every species, there’s nobody and nothing left to put toward the improvements that Brooks and Palmer say would be in the true best interests of fishermen.