Archive for the ‘Statistics’ Category
In addition to his Global War on Terror, George W. Bush also started an initiative to end homelessness. According to his own 2004 “Record of Achievement” (after that they apparently stopped recording and perhaps started shredding):
- In 2003, the Bush Administration announced the largest amount of homeless assistance in history, $1.27 billion to fund 3,700 local housing and service programs around the country.
- President Bush has proposed the Samaritan Initiative, a new $70 million program to provide supportive services and housing for chronically homeless individuals.
- The Interagency Council on Homelessness has been revitalized, bringing together 20 Federal agencies to coordinate efforts to end chronic homelessness in 10 years.
Now, this is all well and good. I am a big supporter of Housing First initiatives (placing the “chronically homeless” in permanent housing) because I really think they help the overall homeless situation. But there are a couple of problems with these kinds of large, centralized programs. The first is that the larger number of “temporary homeless” seems to get lost in the rush to fund Housing First projects; the second is that the number of homeless becomes an important political signpost showing how well a local government is doing.
I don’t have much to say about the first point. I would prefer to point to the excellent article by Violet Law (what a great name!) at the National Housing Institute. Her piece looks at the trade-offs between focusing primarily on Housing First and taking a slightly more balanced approach.
According to the U.S. Department of Housing and Urban Development (HUD), a “chronically homeless person” is an individual who has been without a home for at least one year and is diagnosed with mental illness or drug or alcohol addiction. Housing First focuses on serving this segment of the homeless population.
While the cities that have adopted Housing First have reported a reduction in their chronic homeless population by the hundreds or even thousands in the last decade, homeless advocates are increasingly alarmed that this solution, executed with little increase in federal funding, is threatening to short-change other homeless populations, such as families with children and teenagers who have aged out of foster care, in favor of one narrowly defined group. “We wish [the Bush administration] had picked up the whole agenda of ending homelessness for all,” says Nan Roman, president and chief executive officer of the National Alliance to End Homelessness (NAEH).
While the benefits of permanent housing programs are manifest, some advocates for the homeless are increasingly speaking out against the Bush administration’s position that Housing First is the panacea for ending homelessness – especially now that ICH and the administration are seeking to reauthorize the McKinney-Vento Act, which was, in 1986, the first piece of federal legislation to address homelessness. The administration’s draft version of the reauthorized legislation calls for making permanent the Samaritan bonus – the current incentive to provide permanent housing for the chronically homeless. Those who oppose this incentive charge that the singular focus on the chronically homeless population is at best a misguided effort to solve the complexities of homelessness by defining it too narrowly and simplistically. Some opponents of the administration’s proposed reauthorization bill, mostly from the National Coalition to End Homelessness, support competing legislation introduced in Congress in February, the Homeless Emergency Assistance and Rapid Transition to Housing (HEARTH) Act, which they say would allocate homeless assistance funding in a more balanced manner.
Now, while I am not exactly a homelessness activist, I do try to keep myself informed. I am also a self-admitted statistics geek. Therefore, imagine my surprise when Carl Bialik, the WSJ’s Numbers Guy, combined both in a post discussing the brouhaha in New York City over this year’s homelessness count. NYC pegged the number of homeless at 3,755, down from 3,843 in 2006 and 4,395 in 2005. So, things appear to be looking up.
Not so fast, Batman! Apparently one of the researchers involved in the count stopped participating because he felt the city was undercounting.
Once each winter, the New York City government sends thousands of volunteers into the streets and subways to count the number of people who are homeless. The goal is to get a sense of how well the city is doing at alleviating the most severe kind of homelessness, which could be deadly on a frigid night.
This year, the January count produced an estimate of 3,755 unsheltered homeless people. (The city’s Department of Homeless Services trumpeted the findings in a press release, reporting the count was down 15% from two years earlier.)
But Julien Teitler, an associate professor of social work and sociology at Columbia University who was hired by the city to assist in its count, disputed the city’s total. Prof. Teitler recently told the New York Times that city officials were “arbitrarily adjusting” figures in a way that would produce a lower count.
Bialik based his report partially on the information from a New York Times article highlighting the problems with the study. The dispute is over the method involving decoys to test whether volunteers are correctly counting; a quality control check if you will.
Under Dr. Hopper’s direction, Columbia recruited dozens of “decoys” to go to the same areas and stations as the volunteers. The decoys posed as homeless people.
The volunteers were instructed to ask people who were lingering on the street, in parks or in the subways whether they had a place to spend the night — unless the people were asleep, in which case they were not to be disturbed.
Decoys, if questioned by the volunteers, were instructed to identify themselves and to give the volunteers stickers to record their locations. Otherwise, the decoys were instructed merely to keep track of whether they saw the volunteers pass by.
By keeping track of the number of decoys in a given area and comparing that to the number of decoys actually found, one can estimate how many homeless were missed in a given area. The problem stems from how one actually counts the decoys.
Unless, that is, some stupid statistics professor shows up and claims that you need to adjust the numbers up. Dr. Teitler has discontinued his involvement in the process because he feels the decoy data aren’t being used correctly. His method would raise the current estimate to 4,039 homeless.
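For the statistically curious, the decoy method is essentially a plant-capture estimate: you scale the raw count up by the fraction of planted decoys the volunteers actually found. Here is a minimal sketch in Python; the decoy totals are hypothetical, only the 3,755 raw count comes from the city’s release:

```python
def adjusted_count(raw_count, decoys_planted, decoys_found):
    """Plant-capture estimate: if volunteers spotted only a fraction of
    the known decoys, scale the raw street count up by that fraction."""
    if decoys_found == 0:
        raise ValueError("no decoys found; detection rate is unknown")
    detection_rate = decoys_found / decoys_planted
    return round(raw_count / detection_rate)

# Hypothetical: 100 decoys planted, 93 spotted -> detection rate 0.93,
# so a raw count of 3,755 scales up to 4,038.
print(adjusted_count(3755, 100, 93))
```

The fight, of course, is over exactly which decoys count as “planted” and “found” in each area, which is why two reasonable people can get totals a few hundred apart.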
So this is just a quibble over fewer than 300 people, right? Not exactly.
For me the real meat of the NYT article wasn’t about the statistics, but the politics.
New York City is three years into a five-year “action plan” announced by Mayor Michael R. Bloomberg to end chronic homelessness and reduce the street population by two-thirds, all by April 2009. The results so far are mixed. The number of homeless adults in city shelters has fallen noticeably since 2004, but the number of homeless families is at a record high.
So it came as welcome news on May 2 when the city announced that the third annual Homeless Outreach Population Estimate had shown a slight decrease in unsheltered homeless people.
The city’s Department of Homeless Services said the estimate “shows the city is on track” to meeting the goal of reducing unsheltered homelessness by two-thirds.
Oh! So if you are on track and you are managing the Department of Homeless Services, you are doing a good job. You might even get a promotion some day. No reason to want a lower number, right? So there is no reason to worry. The government has everything under control.
Well. Everything but the numbers.
Yesterday, Michael van der Galiën wrote a comment about an editorial which appeared in the WSJ. Even though I can’t read the entire editorial because I don’t have a subscription (recent confessions aside, the Numbers Guy is free), I’d like to point out something. The WSJ editorial starts with
It’s been a rough week for John Edwards, and now comes more bad news for his “two Americas” campaign theme. A new study by the Congressional Budget Office says the poor have been getting less poor. On average, CBO found that low-wage households with children had incomes after inflation that were more than one-third higher in 2005 than in 1991.
That sounds cheery. Michael echoes Jonathan Chait of The New Republic in pointing out the disingenuousness of the editorial. While income did increase between 1991 and 2005, it seems to have peaked in 2000. I wonder what happened in 2000? Oh yeah. Katherine Harris elected George W. Bush.
But looking at the study, and perhaps casting a sidelong glance at today’s New York Times, might turn up a few more interesting tidbits.
Let’s start with the CBO study. This is the graph shown on the very first page.
While I might be a bit myopic, total government financial aid dropped between 1991 and 2006, though it seems to have remained fairly constant since George W. took office. I wonder why the Republicans don’t seem to want to lower that number, since the poor are obviously earning so much? Don’t they want their tax dollars spent wisely? Aren’t the Democrats supposed to be the ones who tax and spend? What about the welfare queens in pink caddies?!
But wait. It gets better. Now let’s compare this with the graph on page 11. This chart shows the increase in real income of households with children.
While the report is titled Changes in the Economic Resources of Low-Income Households with Children [my emphasis], this chart includes all income brackets from the lowest to the highest. (Note: I edited the graph to include the income levels.)
So while lower class income does seem to have improved, the over-pressured, much maligned, highest-income bracket is doing much better. Thank goodness! I was starting to think the Republicans hadn’t achieved anything.
And remember, the scale on that graphic is percent. That means not only are the top 20% doing better in an absolute sense, they are doing better in a relative sense. That means not only were the best of the best earning much more to begin with, they get even more income now. Cool huh? Bush II is my hero!
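To see why equal-looking percentage gains are anything but equal, here is a toy calculation. The incomes and growth rates below are made up for illustration; none of these figures come from the CBO report:

```python
# Made-up incomes and hypothetical cumulative growth rates, just to
# illustrate the percent-vs-absolute point: a bigger percentage gain
# on a bigger base widens the absolute gap dramatically.
low_1991, high_1991 = 20_000, 150_000
low_growth, high_growth = 0.35, 0.50

low_2005 = low_1991 * (1 + low_growth)
high_2005 = high_1991 * (1 + high_growth)

print(f"gap in 1991: ${high_1991 - low_1991:,}")      # $130,000
print(f"gap in 2005: ${high_2005 - low_2005:,.0f}")   # $198,000
```

Both households “gained,” but the dollar distance between them grew by more than half.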
And who might these poor, deprived super-rich be?
Eduardo Porter gives us a glimpse in today’s New York Times.
As executive pay has surged in most American companies, attention has focused on the growing gap between the earnings of top executives and the average wage of workers in cubicles or on the shop floor. Little noticed, though, is how much the gap has also widened between the summit and the next few echelons down.
Few are deprived in corporate suites, of course. But the widening disparities in business, which show up in a variety of other ways, reflect a dynamic that is taking hold across the economy: the growing concentration of wealth and income among a select group at the pinnacle of success, leaving many others with similar talents and experience well behind.
In the 1960s and ’70s, chief executives running the nation’s biggest companies earned 80 percent more, on average, than the third-highest-paid executives, according to a recent study by Carola Frydman of the Massachusetts Institute of Technology and Raven E. Saks at the Federal Reserve. By the early part of this decade, the gap in the executive suite between No. 1 and No. 3 [poor little guy] had swollen to 260 percent. [my emphasis]
Perhaps we need to stop calling these people neo-conservatives and start calling them “neo-feudalists”: people creating a super-rich elite able to bend justice to keep themselves in power and suit their lifestyles.
While even those one or two titles – um – ranks down are left to deal with the unruly serfdom, the CEOs (barons?) and presidents (kings?) can truly say: the economic outlook of the serfs has never been better; much better now than under that pesky democracy thing.
As a poor – um – economically challenged liberal, I have a dirty secret. I sneak over to the WSJ about once a day.
Well, the secret isn’t that dirty: I don’t read the editorials (ick, ick, ick). Nope. I’m a fan of Carl Bialik, The Numbers Guy.
I’m more or less against fact-based science discussions – especially when statistics are used by people who haven’t looked at the work. But Bialik has a great way of making numbers seem accessible. His discussion of the meta-analysis on the disparaged drug Avandia is a case in point.
The big news yesterday that the diabetes drug Avandia may pose cardiac risks was based on something called a meta-analysis. It’s a type of research that has some significant drawbacks, but also some unique advantages.
In a meta-analysis, researchers pool results from different studies — in this case, Cleveland Clinic cardiologist Steven Nissen and statistician Kathy Wolski analyzed 42 studies. Those studies were done by many different people, and as you might expect, there was wide variation between them. Sometimes Avandia was compared with a placebo and sometimes with alternate treatments. Adverse events — namely heart attacks shown to occur with higher frequency among Avandia users — may not have been identified consistently across the different trials. And if they weren’t, Dr. Nissen would have no way to know, because he was looking at study summaries and not patient-level data. The limitations of this “study of studies” filled a lengthy third paragraph in an accompanying New England Journal of Medicine editorial.
So why, then, use meta-analysis at all? Because for drug dangers that are rare enough, even studies of thousands of patients might not suffice to separate a real risk from random statistical variation. Combining tens of thousands of patients who underwent the treatment separately, under different protocols and supervision, may be the only way to clear thresholds for statistical significance.
He goes on to clearly describe the strengths and weaknesses of the technique, explaining the importance of the p value, when meta-analyses are useful, and why both sides tend to fight over whether a given meta-analysis is valid.
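For the curious, the core trick of a meta-analysis fits in a few lines. This is a generic inverse-variance fixed-effect pooling sketch with invented numbers, not a reconstruction of the actual Avandia analysis:

```python
import math

def fixed_effect_pool(effects, variances):
    """Generic inverse-variance pooling: each study's effect estimate
    is weighted by 1/variance, so large, precise trials dominate."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Three invented trials (effects are log odds ratios; > 0 suggests harm),
# each too small to be conclusive on its own.
effects = [0.35, 0.42, 0.38]
variances = [0.30, 0.25, 0.40]

pooled, se = fixed_effect_pool(effects, variances)
print(f"pooled log OR = {pooled:.3f}, standard error = {se:.3f}")
```

Notice that the pooled standard error (about 0.32) is smaller than that of any single trial (at least 0.5 here), which is exactly why rare harms only become visible when tens of thousands of patients are combined.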
I love statistics. (Actually, since I haven’t discussed this face to face with statistics, I should probably call it a crush, but you get the idea.)
As an example, most people, when confronted with a statistics example involving doctors, cancer patients and risk would probably change the channel. Me – I buy the book! From Joel Best’s More Damn Lies and Statistics (the sequel to Damn Lies and Statistics),
Consider the following word problem about women receiving mammograms to screen for breast cancer (the statements are, by the way, roughly accurate in regard to women in their forties who have no other symptoms):
The probability that [a woman] has breast cancer is 0.8 percent. If a woman has breast cancer, the probability is 90 percent that she will have a positive mammogram. If a woman does not have breast cancer, the probability is 7 percent that she will still have a positive mammogram. Imagine a woman who has a positive mammogram. What is the probability that she actually has breast cancer?
Confused? Don’t be ashamed. When this problem was posed to twenty-four physicians, exactly two managed to come up with the right answer. Most were wildly off: one-third answered that there was a 90 percent probability that a positive mammogram denoted actual breast cancer; and another third gave figures of 50 to 80 percent. The correct answer is about 9 percent.
Let’s look carefully at the problem. Note that breast cancer is actually rather rare (0.8 percent); that is, for every 1,000 women, 8 will have breast cancer. There is a 90 percent probability that those women will receive positive mammograms – say, 7 of the 8. That leaves 992 women who do not have breast cancer. Of this group 7 percent will also receive positive mammograms – about 69 cases of what are called false positives. Thus a total of 76 (7+69=76) women will receive positive mammograms, yet only 7 of those – about 9 percent – will actually have breast cancer. The point is that measuring risk often requires a string of calculations. Even trained professionals (such as doctors) are not used to calculating and find it easy to make mistakes. [my emphasis]
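Best’s walk-through rounds to whole women; applying Bayes’ theorem directly to the stated probabilities gives essentially the same answer:

```python
def posterior(prevalence, sensitivity, false_positive_rate):
    """Bayes' theorem: P(cancer | positive mammogram)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Figures from the excerpt: 0.8% prevalence, 90% sensitivity,
# 7% false-positive rate.
p = posterior(0.008, 0.90, 0.07)
print(f"{p:.1%}")  # prints 9.4%
```

The rare-disease prevalence swamps everything: even a 90-percent-accurate test is wrong about nine times out of ten when it says “positive.”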
That is why fact-based science discussions fail. Not because the facts are wrong, but because any discussion of the issue won’t fit into a 30 second interview and boil down to a 25 word text snippet.
This is where framing science needs to be used. You need to be able to tell a story about how science works, how scientific uncertainty works without getting people nervous. Perhaps the fundamental difference between a scientist and a non-scientist is that the latter sees danger in uncertainty, the former sees an opportunity to write a grant proposal.
To be able to frame science, you need ideas, examples, and good stories. Like the Avandia study discussed by the Numbers Guy or some of the topics on the very entertaining Freakonomics blog by Steven Levitt and Stephen Dubner.
But sometimes – I just love the idea for itself. Statistics about statistics. Because that is just sooo totally meta.
Today I’d like to tell two very sad tales, stories about suicide. One paints a very sad picture; the other, only half of one. The first story is about a depressed teenager.
In 1997, Matt Miller, a 13-year-old, started having behavioural problems: his grades dropped, he was banging his head against his locker at school, and he began urinating on the bathroom floor. His parents, alerted to the problem by school officials, took him to an adolescent psychiatrist, who diagnosed an unspecified depression. Since the boy did not show improvement after three weeks, the psychiatrist prescribed the antidepressant Zoloft, a so-called selective serotonin reuptake inhibitor (SSRI). A week later the young man committed suicide by hanging himself.
The parents suspected the medication played an important role in their son’s death and sued the maker of the antidepressant – pharma giant Pfizer. They enlisted the help of an expert witness, Dr. David Healy. Healy had studied the effects of SSRIs on individuals not suffering from depression and reported that a few had reacted with obsessive suicidal thoughts. Pfizer’s counsel argued that Healy’s testimony should not be admitted because it did not meet the so-called Daubert standards, which require judges to act as gatekeepers for expert testimony and require evidence to have won “widespread acceptance” in professional circles. (This is the same standard the defendants in the Kitzmiller v. Dover case attempted to use to prevent Barbara Forrest from testifying. They failed, and her testimony later proved damning to the Intelligent Design case.)
The second Miller story is not about someone who committed suicide, but someone studying it. Dr. Matthew Miller is the Associate Director of Harvard Injury Control Research Center and does research into methods for preventing suicide.
In a study appearing in the April issue of The Journal of Trauma, Miller is presenting his research into the correlation between the presence of firearms in households and suicide rates.
In the first nationally representative study to examine the relationship between survey measures of household firearm ownership and state level rates of suicide in the U.S., researchers at the Harvard School of Public Health (HSPH) found that suicide rates among children, women and men of all ages are higher in states where more households have guns. The study appears in the April 2007 issue of The Journal of Trauma.
“We found that where there are more guns, there are more suicides,” said Matthew Miller, Assistant Professor of Health Policy and Management at HSPH and lead author of the study.
Suicide ranks as one of the 15 leading causes of death in the U.S.; among persons less than 45 years old, it is one of the top three causes of death. In 2004, more than half of the 32,439 Americans who committed suicide used a firearm.
It should also be noted that there are more suicides in America per year than murders. It is clear that this study will be used by gun control lobbies to argue for more restrictions and attacked by firearm lobbies for being flawed.
While I am highly sceptical of handgun ownership, my alarm bells started ringing while reading the article describing the study. I got more suspicious when I read the summary,
The researchers recommend that firearm owners take steps to make their homes safer. “Removing all firearms from one’s home is one of the most effective and straightforward steps that household decision-makers can take to reduce the risk of suicide,” says Miller. “Removing firearms may be especially effective in reducing the risk of suicide among adolescents and other potentially impulsive members of their home. Short of removing all firearms, the next best thing is to make sure that all guns in homes are very securely locked up and stored separately from secured ammunition. In a nation where more than half of all suicides are gun suicides and where more than one in three homes have firearms, one cannot talk about suicide without talking about guns,” he adds.
Laudable sentiments all. But they only tell half the story.
You see, worldwide, America stands head and shoulders above the rest of the world with respect to access to firearms. There are many studies showing a strong correlation between the number of suicides, homicides and accidents using firearms. Unfortunately these studies usually don’t tell everything.
Let’s compare the data for Germany and the US. Germany requires firearms to be registered and gun owners to be licensed; both practices are handled in a patchwork fashion in the US. With only 8.9 percent of households having firearms, Germany had a rate of 1.44 firearm deaths per 100,000 residents (0.21 murders and 1.23 suicides). During a similar reporting period, the US boasted a whopping 41 percent of households with firearms and 13.47 firearm-related deaths per 100,000 (6.24 murders, 7.23 suicides). This looks damning.
I would agree that the data do point to a correlation between firearm availability and a direct increase in homicides. I think that is the paradox of the NRA argument about keeping weapons to defend oneself.
But if one concentrates on suicides, the picture changes. Let’s look at the overall suicide rate for the two countries. The US has a lower overall suicide rate than Germany (21.7 to 27.4 per 100,000).
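Working through the numbers quoted above makes the substitution effect plain: subtract firearm suicides from the overall rate and Germany’s non-firearm suicide rate towers over America’s.

```python
# Rates per 100,000 residents, taken from the figures quoted above.
rates = {
    "Germany": {"overall_suicide": 27.4, "firearm_suicide": 1.23},
    "US":      {"overall_suicide": 21.7, "firearm_suicide": 7.23},
}

for country, r in rates.items():
    non_firearm = r["overall_suicide"] - r["firearm_suicide"]
    firearm_share = r["firearm_suicide"] / r["overall_suicide"]
    print(f"{country}: {non_firearm:.2f} non-firearm suicides per 100k "
          f"(firearms account for {firearm_share:.0%} of suicides)")
```

Firearms account for roughly a third of US suicides but under 5 percent of German ones, yet Germany’s overall rate is higher: the method changes, the deaths do not simply disappear.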
Thus it would seem that any strong correlation between firearm ownership and overall suicide rates isn’t valid. What is valid is that if firearms are available, they will be used as the preferred method; but there are many, many ways to kill yourself.
So, even though I truly believe Dr. Miller’s heart is in the right place, I don’t trust his research. And any attorney attempting to use it in court will probably fail against an analysis similar to mine. Which brings me back to the first story.
Having research that only shows one side of an issue is one of the things that led to the creation of the Daubert standards. In the case of the suicide of Matt Miller the judge asked for help. According to the excellent Nation article about this,
To help evaluate Healy’s research, US District Court Judge Kathryn Vratil appointed two independent experts, Yale epidemiologist John Concato and University of Illinois psychiatrist John Davis, to answer her questions. “I had envisioned a freewheeling scientist-to-scientist dialogue,” says Vickery, the Millers’ attorney. Vratil, an appointee of the first President Bush, had other ideas: To avoid any appearance of bias, she barred the experts from talking with Healy or any other witness as they prepared their findings.
In their report, the two men called Healy an “accomplished investigator.” But they also said Healy’s methodology “has not been accepted in the relevant scientific community” and that the psychopharmacologist holds a “minority view” about SSRIs and suicidality. Agencies like the Food and Drug Administration (FDA), they noted, had found no such relationship.
In February 2002, Judge Vratil issued her key rulings in Miller v. Pfizer. “Dr. Healy is an accomplished researcher,” she wrote, “and his credentials are not in dispute.” But his belief in the SSRI-suicide link is a “distinctly minority view,” she added, and the flaws in his methodology “are glaring, overwhelming, and unexplained.” With that, Vratil rejected Healy as an expert witness–and dismissed the lawsuit against Pfizer. The Millers appealed all the way to the Supreme Court, which in October 2004 rejected their petition for a hearing.
It would seem that the minority opinion lost the day, a single researcher reading too much into the data. It would seem that Dr. Healy is analogous to Dr. Miller. Both had valid claims but were overreaching.
Dr. Miller correctly points out that the number of suicides using firearms is directly correlated with the number of firearms available. That does not, however, lead to the conclusion that lowering the number of firearms will directly lower the number of suicides. If that were true, Germany should have a much lower rate of suicide than America; indeed, one would expect a dramatic drop. We don’t see that.
Dr. Healy looked at the data and worried about people being severely damaged by the very treatment meant to save them. Other researchers argued he was wrong. Perhaps the saddest part of this story is that Dr. Healy was likely right. Returning to the Nation article:
In April 2006 the drugmaker GlaxoSmithKline disclosed that adults with major depression were almost seven times more likely to attempt suicide after taking the SSRI Paxil than after taking a placebo, although these events were rare. In November an FDA analysis of 372 clinical trials, involving almost 100,000 patients, revealed a twofold risk of suicidal behavior for adults under 25 who took SSRIs. To those who share David Healy’s views, the latest research is an affirmation–too late for the Millers but perhaps early enough to avert future tragedies. “I believe it vindicates Healy in a major way,” says Antonuccio, the Nevada professor. “Here mainstream scientists are saying, Yes, these antidepressants cause suicidality–which is what Healy has been saying all along.”
So perhaps there is a more important moral here.
Sometimes, it doesn’t matter whether the science is right or wrong. Sometimes it might be better to err on the side of safety – licensing and regulating guns on the one hand and strictly controlling the use of SSRIs on the other.
But for many, this kind of pragmatic solution comes too late and at much too high a price: the high price of legal fees, lobbyists – and lives.
Oh! For shame Wonkette, For shame.
According to this analysis carried out by ‘conservatives’ and posted at Shakespeare’s Sister, Wonkette only managed to land third on the list of most potty-mouthed leftie blogs.
In what has to be one of the most pathetic wastes of time and energy I’ve ever seen, some dude has calculated that the top 18 progressive blogs use Carlin’s “seven dirty words” (shit, piss, fuck, cunt, cocksucker, motherfucker, and tits, for the uninitiated) way more than the top 22 conservative blogs. Evidently, the ratio is 18:1. Scandalous!
Where is your pride? (And yes, I know – if they had just included your blog sibling Fleshbot you would have rocked!)
So Wonkette! Get up and get started. More DraculaCunt stories. More Cheney quotes! You can still catch up.
After my global warming rant, I thought I’d give a brief heads-up to this week’s Time cover story about risk. Although it only very briefly mentions global warming, the article does an excellent job of explaining WHY we don’t respond to explanations of risk.
We pride ourselves on being the only species that understands the concept of risk, yet we have a confounding habit of worrying about mere possibilities while ignoring probabilities, building barricades against perceived dangers while leaving ourselves exposed to real ones. Six Muslims traveling from a religious conference were thrown off a plane last week in Minneapolis, Minn., even as unscreened cargo continues to stream into ports on both coasts. Shoppers still look askance at a bag of spinach for fear of E. coli bacteria while filling their carts with fat-sodden French fries and salt-crusted nachos. We put filters on faucets, install air ionizers in our homes and lather ourselves with antibacterial soap. “We used to measure contaminants down to the parts per million,” says Dan McGinn, a former Capitol Hill staff member and now a private risk consultant. “Now it’s parts per billion.”
At the same time, 20% of all adults still smoke; nearly 20% of drivers and more than 30% of backseat passengers don’t use seat belts; two-thirds of us are overweight or obese. We dash across the street against the light and build our homes in hurricane-prone areas–and when they’re demolished by a storm, we rebuild in the same spot. Sensible calculation of real-world risks is a multidimensional math problem that sometimes seems entirely beyond us. And while it may be true that it’s something we’ll never do exceptionally well, it’s almost certainly something we can learn to do better.
The problem with habituation is that it can also lead us to go to the other extreme, worrying not too much but too little. Sept. 11 and Hurricane Katrina brought calls to build impregnable walls against such tragedies ever occurring again. But despite the vows, both New Orleans and the nation’s security apparatus remain dangerously leaky. “People call these crises wake-up calls,” says Dr. Irwin Redlener, associate dean of the Mailman School of Public Health at Columbia University and director of the National Center for Disaster Preparedness. “But they’re more like snooze alarms. We get agitated for a while, and then we don’t follow through.”
If you haven’t spent any time reading about these kinds of issues, it is worth taking the time to work through the article. Most of us are hard-wired to react to stress and risks in certain ways. That’s probably why I have always driven slowly and why I spend more time worrying about the realistic threat of global warming than I do thinking about an avian flu pandemic or the consequences of a terrorist attack. Just me and my stupid serotonin levels, thankyouverymuch.
There was one part of the article that almost caused me to hurt myself snorting.
The government must also play a role in this, finding ways to frame warnings so that people understand them. John Graham, formerly the administrator of the federal Office of Information and Regulatory Affairs, says risk analysts suffer no end of headaches trying to get Americans to understand that while nuclear power plants do pose dangers, the more imminent peril to both people and the planet comes from the toxins produced by coal-fired plants. Similarly, pollutants in fish can be dangerous, but for most people–with the possible exception of small children and women of childbearing age–the cardiac benefits of fish easily outweigh the risks. “If you can get people to compare,” he says, “then you’re in a situation where you can get them to make reasoned choices.”
The government? Like the president, the vice president, the heads of the FDA and the EPA, and all those politically motivated individuals? All those who are absolutely opposed to any support from industry. Industry, which just might have a slight interest in seeing the realistic risks of current policy slightly – adjusted? Oh! I feel better now. Thanks.
Somehow putting risk assessment in the hands of the current (and probably future) administration seems, well, risky, to put it mildly. Maybe we should just go buy a couple of bags of chips, a carton of cigarettes, and a couple of six-packs; hop in the ol’ Ford Pinto and drive up north. Then we can do our part to help the environment: we could feed the polar bears – with ourselves. It’s not risky, it’s a sure-thing death sentence.
Better than trusting the government.