
Astronomers say there are at least 17 billion Earth-sized planets in the Milky Way


Back in June of last year, astronomers working with the Kepler space telescope made the bold proclamation that all stars have planets. While this is all fine and well (if not intuitively obvious), what was not known is how many Earth-sized planets are out there. But now, a new analysis of Kepler data may be providing the answer — and it's potentially huge. According to research just presented at the 221st meeting of the American Astronomical Society in California, the Milky Way hosts no less than 17 billion planets roughly the size of Earth.

The study, which was conducted by Francois Fressin of the Harvard-Smithsonian Center for Astrophysics (CfA), indicates that about 17 percent of stars have an Earth-sized planet in an orbit closer than Mercury. That's about one in every six star systems. Given that the Milky Way has about 100 billion stars, that adds up to the figure of 17 billion.
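The arithmetic behind that headline figure is simple enough to check. Here's a minimal sketch, assuming the round estimates quoted above (these are the article's ballpark numbers, not precise measurements):

```python
# Sanity check of the headline figure: 17 percent of ~100 billion stars.
stars_in_milky_way = 100e9        # ~100 billion stars (round estimate)
fraction_with_earth_sized = 0.17  # 17 percent, about one star in six

earth_sized_planets = stars_in_milky_way * fraction_with_earth_sized
print(f"{earth_sized_planets:,.0f}")  # 17,000,000,000
```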

Not Earth-Like, and Not Necessarily Habitable

Now, it's important to note that this number only describes those planets which are in close proximity to their parent stars — a distance that places them outside their systems' habitable zones (orbital periods of about 85 days or less).

The reason for the distinction is that it is currently very difficult to detect small planets farther out because of the limitations of current detection techniques (namely the transit method, which detects the minute dip in a star's light when a planet passes in front of it).

At the same time, however, the number is revealing. If there are 17 billion Earth-sized planets in Mercury-like orbits, it's probably safe to assume that a substantial number reside farther out, in the habitable zones. This is an unsubstantiated assumption, of course, but at least there's more data available now to make such conjectures.

To conduct their investigation, the astronomers surveyed about 2,400 candidate planets spotted by the Kepler satellite over the first 16 months of its operation.

Fressin's figures had to take into account an obvious selection effect: the transit method can only detect planets whose orbits are aligned edge-on as seen from Earth. This required the astronomers to do some extrapolating.

Looking at the Numbers

The researchers found that nearly 50 percent of stars have a planet the size of Earth or larger in a close orbit. When including larger planets, like close-proximity super-Jupiters, that number creeps up to 70 percent. That said, the astronomers believe that virtually all stars have planets, on account of data coming in from other observations and detection techniques.

Fressin's team categorized the planets into five types.

As already noted, 17 percent of stars have Earth-sized planets (0.8 to 1.25 times the size of Earth). About 25 percent of stars feature a super-Earth (1.25 to 2 times the size of Earth, in orbits of 150 days or less). Similarly, a quarter of stars have mini-Neptunes (2 to 4 times the size of Earth, in orbits of 250 days or less). Only 3 percent of stars have large Neptunes (4 to 6 times the size of Earth), and 5 percent have Jupiter-like gas giants (6 to 22 times the size of Earth, in orbits of 400 days or less).

Interestingly, the researchers also discovered that, with the exception of gas giants, all types of planets are proportionately represented around different types of stars, whether red dwarfs or sun-like stars such as our own.

Moving forward, a challenge for the astronomers will be to detect Earth-sized and Earth-like planets that sit farther out. Because such planets complete their orbits less frequently, they transit their stars less often, making them harder to catch with the transit method. It's a problem that will likely be solved by due diligence — and extreme amounts of patience.

And in addition to this news, Fressin's team also announced 461 new planet candidates — bringing Kepler's total number of candidates to 2,740.

The study has been accepted for publication in The Astrophysical Journal.

Source: Harvard-Smithsonian Center for Astrophysics.

All images via Harvard-Smithsonian Center for Astrophysics.


New video of Machine Perception Lab's creepy robot baby shows that it is indeed creepy


We've known about Machine Perception Lab's super-realistic robot baby for quite some time now, but we've never actually seen it in action — at least until now. As this new video shows, roboticists appear to be getting perilously close to crossing the uncanny valley — while scaring the crap out of the good children of Earth at the same time.

Named Diego-san, it's an exaggerated representation of a one-year-old child that stands 4 feet 3 inches tall (130 cm) and weighs 66 pounds (30 kg). Its body consists of 44 pneumatic joints, and its head contains about 27 moving parts. And as the video shows, its facial expressions are unbelievably lifelike.

In order to simulate the reactions of a real child, Diego-san has had high-definition cameras implanted in its eyes, allowing it to see people, gestures, and expressions. Then, through the use of an AI modeled on human babies, it can "learn" from people in the same way that a real baby does. The researchers say it's a step forward in the development of "emotionally relevant robotics."

The face was developed by David Hanson of Hanson Robotics, and the body was designed by Kokoro Co. Ltd. Part of the funding for Diego-san came from the National Science Foundation, which is working to spur the development of cognitive AI and human-robot interaction research.

Image: MPL. H/t Gizmag.

Is Stephen Hawking 'more machine now than man'?


There's a great piece over at Wired by anthropologist Hélène Mialet about Stephen Hawking and the various ways he's learned to adapt to — and even transcend — his severe physical limitations. While it would be tempting to merely discuss his infrared-activated voice synthesizer and robotic wheelchair, Mialet points out that there's more to Hawking than meets the eye — that he's not so much a person anymore as the central node of the 'Hawking collective': a diverse group of individuals who enable him to move beyond his disability in a profound way.

And Mialet would know. As an anthropologist interested in science and technology, she recently conducted an in-depth ethnographic study of Hawking. "He essentially became my 'tribe,'" she writes. "For years, I followed him as he worked, resolved problems, produced theories, gave talks and participated in interviews and documentaries. I interviewed all the people around him: his nurses, personal assistants, students, colleagues and even the journalists. I lived and breathed the Hawking tribe."

And what she discovered was that, to understand Hawking, one has to understand the people and machines that surround him — the network that amplifies his competencies. She writes:

A "yes" answer to the question "Do you want to go to this conference?" will allow Hawking to travel from one end of the earth to the other – without having done anything more than twitch an eyebrow. His artificial voice offers another instrument of thought: What is well conceived is well said, and this is more true in Hawking's case. Since he doesn't speak, his disability forces him to be even more clear in his mind and less worried about all the work those utterances entail.

At the same time, this voice effaces – and makes us forget – the role of the machine insofar as it speaks for, comes from, and marks the presence of a public persona. This is despite the fact that every utterance is written in advance, either by Hawking or someone in his embodied network. In the same way his students perform the calculations upon which his "speeches" (and articles) will be based.

How is this different from other stars – or even the president – surrounded by an entourage responsible for meeting their needs and marketing their image?

Both Hawking and celebrities hold authority from their positions at the top of the hierarchy, while the bottom of that hierarchy makes it possible for these stars to enact and maintain their positions at the top. But in Hawking's case, the network is much more – almost completely – distributed and intimately embodied. Hawking isn't just issuing remote commands and expressed desires, his entire body and even his entire identity have become the property of a collective human-machine network. He is what I call a distributed centered-subject: a brain in a vat, living through the world outside the vat.

Traditionally, assistants execute what the head directs or has thought of beforehand. But Hawking's assistants – human and machine – complete his thoughts through their work; they classify, attribute meaning, translate, perform. Hawking's example thus helps us rethink the dichotomy between humans and machines.

Be sure to check out Mialet's entire article.

Image: Ted S. Warren/Associated Press.

A disturbing glimpse of Australia's wildfires as seen from space


International Space Station Commander Chris Hadfield has snapped a series of images that capture the scale and devastation of the wildfires currently blazing through parts of Australia. The bushfires, which have been burning since Friday, are being fueled by record high temperatures. In fact, the Australian Bureau of Meteorology has added an extra color to its heat map to account for unprecedented temperatures reaching upwards of 129 degrees Fahrenheit (54 degrees Celsius).

The fires, which are sweeping through Tasmania and New South Wales, are consuming thousands of acres of forests and farms, while destroying a number of homes. Incredibly, there have been no reports of deaths, though 100 people remain unaccounted for in Tasmania.


Space images: Chris Hadfield/NASA; heat map via Australian Bureau of Meteorology.

Finally, an explanation for why our fingers and toes get all pruny when they're wet


Shriveled fingers and toes are something we're all familiar with, yet scientists have struggled to explain why it happens. A longstanding theory is that wrinkles are the result of water passing through the outer layer of the skin, causing it to swell. But as neuroscientist Mark Changizi pointed out a few years ago, it's clearly a spontaneous reflex that requires a better explanation. Now, writing in Biology Letters, researcher Tom Smulders believes he's found the answer — and it has to do with our ability to handle wet objects.

Smulders, who works at Newcastle University's Centre for Behaviour and Evolution, has confirmed that objects are indeed easier to handle with wrinkled fingers than with dry, smooth ones — a suggestion that our ancestors evolved the physiological response as they foraged for food in wet vegetation or in streams. While Changizi proposed a similar theory, it was Smulders who proved him correct by virtue of a simple experiment. Writing for BBC News, Jonathan Amos explains:

[The study] involved asking volunteers to pick up marbles immersed in a bucket of water with one hand and then passing them through a small slot to be deposited by the other hand in a second container.

Volunteers with wrinkled fingers routinely completed the task faster than their smooth-skinned counterparts.

The team found there was no advantage from ridged fingers when moving dry objects. This suggests that the wrinkles serve the specific function of improving our grip on objects under water or when dealing with wet surfaces in general.

Makes sense. Pruny toes would have certainly helped when walking on slick, wet surfaces.

Smulders contends that the response, which is triggered by the nervous system only under specific conditions, must have an underlying reason — one, he says, that is the product of natural selection.

The next step for the researchers will be to study how wrinkled fingers improve grip and channel away excess water — a process that may be similar to the way treads on wet tires work.

More at BBC.

Image: Taratorki/Shutterstock.

The 12 cognitive biases that prevent you from being rational


The human brain is capable of 10^16 processes per second, which makes it far more powerful than any computer currently in existence. But that doesn't mean our brains don't have major limitations. The lowly calculator can do math thousands of times better than we can, and our memories are often less than useless — plus, we're subject to cognitive biases, those annoying glitches in our thinking that cause us to make questionable decisions and reach erroneous conclusions. Here are a dozen of the most common and pernicious cognitive biases that you need to know about.

Before we start, it's important to distinguish between cognitive biases and logical fallacies. A logical fallacy is an error in logical argumentation (e.g. ad hominem attacks, slippery slopes, circular arguments, appeal to force, etc.). A cognitive bias, on the other hand, is a genuine deficiency or limitation in our thinking — a flaw in judgment that arises from errors of memory, social attribution, and miscalculations (such as statistical errors or a false sense of probability).

Some social psychologists believe our cognitive biases help us process information more efficiently, especially in dangerous situations. Still, they lead us to make grave mistakes. We may be prone to such errors in judgment, but at least we can be aware of them. Here are some important ones to keep in mind.

Confirmation Bias

We love to agree with people who agree with us. It's why we only visit websites that express our political opinions, and why we mostly hang around people who hold similar views and tastes. We tend to be put off by individuals, groups, and news sources that make us feel uncomfortable or insecure about our views — an aversion to what the social psychologist Leon Festinger called cognitive dissonance. It's this preferential mode of behavior that leads to the confirmation bias — the often unconscious act of referencing only those perspectives that fuel our pre-existing views, while at the same time ignoring or dismissing opinions — no matter how valid — that threaten our world view. And paradoxically, the internet has only made this tendency even worse.

Ingroup Bias

Somewhat similar to the confirmation bias is the ingroup bias, a manifestation of our innate tribalistic tendencies. And strangely, much of this effect may have to do with oxytocin — the so-called "love molecule." This neurotransmitter, while helping us to forge tighter bonds with people in our ingroup, performs the exact opposite function for those on the outside — it makes us suspicious, fearful, and even disdainful of others. Ultimately, the ingroup bias causes us to overestimate the abilities and value of our immediate group at the expense of people we don't really know.

Gambler's Fallacy

It's called a fallacy, but it's more a glitch in our thinking. We tend to put a tremendous amount of weight on previous events, believing that they'll somehow influence future outcomes. The classic example is coin-tossing. After flipping heads, say, five consecutive times, our inclination is to predict an increase in likelihood that the next coin toss will be tails — that the odds must certainly now favor tails. But in reality, the odds are still 50/50. As statisticians say, the outcomes of different tosses are statistically independent, and the probability of any outcome is still 50%.
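The claim is easy to verify with a quick simulation. Here's a minimal sketch (illustrative only, not from the article): it hunts for runs of five heads and records what the sixth toss turned out to be.

```python
# Simulate the coin-toss example: after five heads in a row,
# the next toss is still 50/50.
import random

sixth_tosses = []
while len(sixth_tosses) < 100_000:
    tosses = [random.random() < 0.5 for _ in range(6)]  # True = heads
    if all(tosses[:5]):                 # found five heads in a row...
        sixth_tosses.append(tosses[5])  # ...record what came next

heads_rate = sum(sixth_tosses) / len(sixth_tosses)
print(f"P(heads after five heads) ~ {heads_rate:.3f}")  # hovers around 0.500
```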

Relatedly, there's also the positive expectation bias — which often fuels gambling addictions. It's the sense that our luck has to eventually change and that good fortune is on the way. It also contributes to the "hot hand" misconception. Similarly, it's the same feeling we get when we start a new relationship that leads us to believe it will be better than the last one.

Post-Purchase Rationalization

Remember that time you bought something totally unnecessary, faulty, or overly expensive, and then you rationalized the purchase to such an extent that you convinced yourself it was a great idea all along? Yeah, that's post-purchase rationalization in action — a kind of built-in mechanism that makes us feel better after we make crappy decisions, especially at the cash register. Also known as Buyer's Stockholm Syndrome, it's a way of subconsciously justifying our purchases — especially expensive ones. Social psychologists say it stems from the principle of commitment, our psychological desire to stay consistent and avoid a state of cognitive dissonance.

Neglecting Probability

Very few of us have a problem getting into a car and going for a drive, but many of us experience great trepidation about stepping inside an airplane and flying at 35,000 feet. Flying, quite obviously, is a wholly unnatural and seemingly hazardous activity. Yet virtually all of us know and acknowledge the fact that the probability of dying in an auto accident is significantly greater than getting killed in a plane crash — but our brains refuse to surrender to this crystal-clear logic (statistically, we have a 1 in 84 chance of dying in a vehicular accident, as compared to a 1 in 5,000 chance of dying in a plane crash [other sources indicate odds as high as 1 in 20,000]). It's the same phenomenon that makes us worry about getting killed in an act of terrorism as opposed to something far more probable, like falling down the stairs or accidental poisoning.

This is what the social psychologist Cass Sunstein calls probability neglect — our inability to properly grasp peril and risk — which often leads us to overstate the risks of relatively harmless activities while understating the risks of genuinely dangerous ones.
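To put the article's own numbers side by side, here's a minimal sketch using the lifetime odds cited above (taking the lower of the two plane-crash estimates):

```python
# Compare the cited lifetime odds of death by car versus by plane.
p_car = 1 / 84       # vehicular accident
p_plane = 1 / 5_000  # plane crash (the lower cited estimate)

print(f"Driving is ~{p_car / p_plane:.0f}x deadlier")  # ~60x
```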

Observational Selection Bias

This is the effect of suddenly noticing things we didn't notice that much before — and wrongly assuming that their frequency has increased. A perfect example is what happens after we buy a new car and we inexplicably start to see the same car virtually everywhere. A similar effect happens to pregnant women who suddenly notice a lot of other pregnant women around them. Or it could be a unique number or song. It's not that these things are appearing more frequently, it's that we've (for whatever reason) selected the item in our mind, and in turn, are noticing it more often. Trouble is, most people don't recognize this as a selection bias, and actually believe these items or events are happening with increased frequency — which can be a very disconcerting feeling. It's also a cognitive bias that contributes to the feeling that the appearance of certain things or events couldn't possibly be a coincidence (even though it is).

Status-Quo Bias

We humans tend to be apprehensive of change, which often leads us to make choices that guarantee that things remain the same, or change as little as possible. Needless to say, this has ramifications in everything from politics to economics. We like to stick to our routines, political parties, and our favorite meals at restaurants. Part of the perniciousness of this bias is the unwarranted assumption that another choice will be inferior or make things worse. The status-quo bias can be summed up with the saying, "If it ain't broke, don't fix it" — an adage that fuels our conservative tendencies. And in fact, some commentators say this is why the U.S. hasn't been able to enact universal health care, despite the fact that most individuals support the idea of reform.

Negativity Bias

People tend to pay more attention to bad news — and it's not just because we're morbid. Social scientists theorize that it's on account of our selective attention and that, given the choice, we perceive negative news as being more important or profound. We also tend to give more credibility to bad news, perhaps because we're suspicious (or bored) of proclamations to the contrary. In evolutionary terms, heeding bad news may have been more adaptive than ignoring good news (e.g. "saber-toothed tigers suck" vs. "this berry tastes good"). Today, we run the risk of dwelling on negativity at the expense of genuinely good news. Steven Pinker, in his book The Better Angels of Our Nature: Why Violence Has Declined, argues that crime, violence, war, and other injustices are steadily declining, yet most people would argue that things are getting worse — a perfect example of the negativity bias at work.

Bandwagon Effect

Though we're often unconscious of it, we love to go with the flow of the crowd. When the masses start to pick a winner or a favorite, that's when our individualized brains start to shut down and enter into a kind of "groupthink" or hivemind mentality. But it doesn't have to be a large crowd or the whims of an entire nation; it can include small groups, like a family or even a small group of office co-workers. The bandwagon effect is what often causes behaviors, social norms, and memes to propagate among groups of individuals — regardless of the evidence or motives in support. This is why opinion polls are often maligned, as they can steer the perspectives of individuals accordingly. Much of this bias has to do with our built-in desire to fit in and conform, as famously demonstrated by the Asch Conformity Experiments.

Projection Bias

As individuals trapped inside our own minds 24/7, it's often difficult for us to project outside the bounds of our own consciousness and preferences. We tend to assume that most people think just like us — though there may be no justification for it. This cognitive shortcoming often leads to a related effect known as the false consensus bias, where we tend to believe that people not only think like us, but that they also agree with us. It's a bias where we overestimate how typical and normal we are, and assume that a consensus exists on matters when there may be none. Moreover, it can also create the effect where the members of a radical or fringe group assume that more people on the outside agree with them than is actually the case. It's also behind the exaggerated confidence we have when predicting the winner of an election or sports match.

The Current Moment Bias

We humans have a really hard time imagining ourselves in the future and altering our behaviors and expectations accordingly. Most of us would rather experience pleasure in the current moment, while leaving the pain for later. This is a bias that is of particular concern to economists (i.e. our unwillingness to curb our spending and save money) and health practitioners. Indeed, a 1998 study showed that, when making food choices for the coming week, 74% of participants chose fruit. But when the food choice was for the current day, 70% chose chocolate.

Anchoring Effect

Also known as the relativity trap, this is the tendency we have to compare and contrast only a limited set of items. It's called the anchoring effect because we tend to fixate on a value or number that in turn gets compared to everything else. The classic example is an item at the store that's on sale; we tend to see (and value) the difference in price, but not the overall price itself. This is why some restaurant menus feature very expensive entrees, while also including more (apparently) reasonably priced ones. It's also why, when given a choice, we tend to pick the middle option — not too expensive, and not too cheap.

Images: Lightspring/Shutterstock, Tsyhun/Shutterstock, Yuri Arcurs/Shutterstock, Everett Collection/Shutterstock, Frank Wasserfuehrer/Shutterstock, George Dvorsky, Barry Gutierrez and Ed Andrieski/AP, Daniel Padavona/Shutterstock, wavebreakmedia/Shutterstock.

It's official: 2012 was the hottest year ever in the U.S.


Last year was pretty messed up as far as weather was concerned in the United States, and now we have the numbers to show why. According to the National Climatic Data Center in Asheville, N.C., the average temperature in 2012 was 55.3 degrees Fahrenheit (12.94 degrees Celsius) — an entire degree higher than the previous record. Moreover, there were 34,008 daily high records set at weather stations across the contiguous U.S., compared to only 6,664 record lows — a ratio that, as recently as the 1970s, was in relative balance.

Usually when records are set they are measured in fractions of a degree. But not last year. Unsurprisingly, meteorologists and climate scientists are treating it as a big deal. The New York Times reports:

Scientists said that natural variability almost certainly played a role in last year's extreme heat and drought. But many of them expressed doubt that such a striking new record would have been set without the backdrop of global warming caused by the human release of greenhouse gases. And they warned that 2012 was probably a foretaste of things to come, as continuing warming makes heat extremes more likely.

Globally, the year won't set a new standard, but it's expected to be the eighth- or ninth-warmest on record — meaning that the 10 warmest years on record have all fallen within the past 15 years. Again from the NYT:

Nobody who is under 28 has lived through a month of global temperatures that fell below the 20th-century average, because the last such month was February 1985.

Last year's weather in the United States began with an unusually warm winter, with relatively little snow across much of the country, followed by a March that was so hot that trees burst into bloom and swimming pools opened early. The soil dried out in the March heat, helping to set the stage for a drought that peaked during the warmest July on record.

The drought engulfed 61 percent of the nation, killed corn and soybean crops and sent prices spiraling.

And of course, there were several tornado outbreaks, Hurricane Isaac, and Superstorm Sandy. In fact, the year featured 11 disasters exceeding $1 billion in damages. And a third of the U.S. population experienced 10 or more days of summer temperatures exceeding 100 degrees Fahrenheit (37.8 degrees Celsius).
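For reference, the Celsius figures in this piece follow the standard Fahrenheit-to-Celsius formula. A minimal sketch:

```python
# Standard Fahrenheit-to-Celsius conversion used for the figures above.
def f_to_c(fahrenheit: float) -> float:
    return (fahrenheit - 32) * 5 / 9

print(f"{f_to_c(55.3):.2f}")  # 12.94 -- 2012's record U.S. average
print(f"{f_to_c(100):.1f}")   # 37.8  -- the summer heat threshold
```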

NOAA's report opened with the following paragraph:

2012 marked the warmest year on record for the contiguous United States with the year consisting of a record warm spring, second warmest summer, fourth warmest winter and a warmer-than-average autumn. The average temperature for 2012 was 55.3°F, 3.2°F above the 20th century average, and 1.0°F above 1998, the previous warmest year.

More here and here.

Image via New York Times.

Baby sharks may be the key to the ultimate shark repellent


Marine biologist Ryan Kempster noticed a very strange thing when he brought an electrical field close to the egg case of an embryonic shark: The unborn — and highly vulnerable — baby altered its movements by ceasing its gill function. It's a strategy, he believes, that helps baby sharks sense when a predator is near — and to respond with a life-preserving change in behavior. But now, using this insight, Kempster hopes to develop a non-lethal shark repellent. We contacted him to find out how he plans to make this work.

Adult sharks are known to use highly sensitive receptors to detect the electric fields emitted by potential prey. But now it appears that their babies are capable of a similar trick, though in their case it's a strategy that helps them avoid becoming prey.

Electroreception

The specific shark that was studied, the bamboo shark (Chiloscyllium punctatum), differs from many others in that its embryos develop completely independently of their mother inside the egg case — and for up to five months. These egg cases are typically deposited by the mother on or near a surface where they are vulnerable to predators like teleost fishes, marine mammals, large molluscan gastropods — and even other sharks. And in the days and weeks following their birth, electroreception still serves to protect the young sharks from predators.

Despite its close quarters and relative isolation from the outside world (not to mention a complete absence of real-world experience), the embryo can recognize dangerous stimuli and react with an instinctive avoidance response. Electroreception works by detecting minute electric field gradients through an array of openings, or 'pores', at the skin's surface.

And it's this knowledge, says Kempster, that will assist in the development of an effective — and non-lethal — shark repellent.

Preventing attacks

"There are a variety of commercially available non-lethal electric shark repellents, but the scientific data supporting their effectiveness is limited," he told io9. "So over the years I have been assessing variation in the electrosensory systems of multiple shark species, during development and into adulthood, to determine if individuals change the way they sense and react to predator-like stimulus."

Kempster, whose study appears today in PLOS, works with the UWA Neuroecology Group at the University of Western Australia. His focus is on the sensory biology of sharks — and he has the specific goal of coming up with the information necessary to produce a device that can protect swimmers and surfers from potential shark attacks.

"The results of this study are a stepping stone to producing an effective device," he says. His team discovered that, not only do shark embryos respond to predator-like stimulus by staying still and stopping their breathing, but they can actually remember previous stimuli and reduce their reaction to it during future encounters.

"This means that sharks may become conditioned to repellent devices if the signal these devices produce does not change substantially over time, thus reducing the effectiveness of a repellent the longer a shark encounters it," he told us.

Interestingly, and on a related note, stingrays have been observed to stop breathing in response to predator-like stimulus. And fascinatingly, some deer and chipmunks have been shown to change their heart rate in response to predators.

You can read the entire study at PLOS.

Image: Willyam Bradberry/Shutterstock.


An entire pod of orca whales may be hopelessly trapped under arctic sea ice [UPDATED]


A dozen orca whales have become enclosed and trapped in arctic sea ice in Hudson Bay, about 30 kilometers (19 miles) off the coast of Inukjuak, Quebec. The whales are currently sharing a small hole in the ice in order to breathe — but it's starting to rapidly shrink. Locals are calling on the Canadian government to come in and help, preferably with icebreakers. But time may be running out.

Update 1: The CBC issued an alert at 09:15 EST this morning, stating that the orcas appear to have left; they have not been seen at the hole since late last night, and their current status is not known, nor is it clear whether they will return.

Update 2: io9 reader lil4alot alerts us to the news that the orcas now appear to be free.

It's very rare to see orcas in the arctic this late into the winter. And in fact, it may be the first confirmed sighting of these whales so far into the Bay in January. Not surprisingly, some experts are saying that climate change is to blame, and that the orcas were taking advantage of the open waters — unaware that the sea would quickly freeze over.

And indeed, it didn't take long. The Bay froze just three days ago, trapping the orcas in the ice. Unlike narwhals, belugas, and bowheads, these whales are not accustomed to swimming in icy conditions; they typically retreat to warmer waters as winter approaches.

According to Peter Inukpuk, mayor of the small Inuit village, the whales are starting to get stressed. "It appears from time to time that they panic," he told the CBC. "Other times they are gone for a long time, probably looking for another open space, which they are not able to find." The hole is about the size of a large pickup truck.

The whales, fully aware of their predicament, are feverishly searching for an escape route, often sending scouts to scour the area for possible openings. Marine biologists say they can use powerful echolocation to search for holes, but there are none within range.

The pod consists of two adults and a number of younger whales — quite possibly an entire family. Observers have counted as many as 12 orcas, and the fear is that the small hole will soon freeze over.

Consequently, the Canadian government is sending a team of experts to evaluate the situation and determine a possible course of action. One possible solution could be to bring in an icebreaker. Early on Wednesday, January 9, Inukpuk requested that the Department of Fisheries and Oceans (DFO) send out a ship to carve a path for the whales.

The worry, however, is that there might not be enough time. The fleet is currently working on ice conditions in other regions of the country. Moreover, some are concerned about the logistical challenges and tremendous costs of the operation.

Members of the DFO are expected to arrive in the region later today.

If nothing is done, and assuming the hole doesn't completely freeze over, the whales will have to wait about three to four months until the ice thaws. It's difficult to say whether any of them will be able to survive that long.

Sources: CBC and Guardian.

Why did the Vikings abandon Greenland?


For nearly 500 years, the Vikings lived and thrived in Greenland. Taking advantage of the Medieval Warm Period, they established outposts in the North Atlantic where they farmed and ranched. But quite suddenly, at the mid-point of the 15th century, they abandoned their settlements and ventured back to Scandinavia. Anthropologists have theorized that they were responding to massive crop failures caused by changing climate conditions, but a new paper published in the Journal of the North Atlantic suggests that this is entirely wrong, and that there were other reasons for the sudden change of heart.

Starting at the end of the 10th century, Vikings established hundreds of scattered farms along protected fjords, where they built their homes and churches. Life was good alongside the edge of the glaciers, but by the 15th century conditions had cooled dramatically, putting an abrupt end to their farming lifestyle. It's this change, anthropologists have argued, that caused extensive crop failure and starvation — forcing them to return to Europe.

But as new research from a Danish-Canadian team now shows, this couldn't possibly have been the case.

According to Jan Heinemeier, Niels Lynnerup, and Jette Arneborg, extensive evidence of the Vikings' dietary habits indicates that they adapted quite well to the changing conditions, becoming fishermen and seal hunters. In fact, up to 80 percent of their diet consisted of seal meat by the 14th century. In a sense, they were becoming more like the Inuit, and less like Vikings.

So why did they leave? Writing in Der Spiegel, Günther Stockinger explains:

So, if it wasn't starvation or disease, what triggered the abandonment of the Greenland settlements in the second half of the 15th century? The scientists suspect that a combination of causes made life there unbearable for the Scandinavian immigrants. For instance, there was hardly any demand anymore for walrus tusks and seal pelts, the colony's most important export items. What's more, by the mid-14th century, regular ship traffic with Norway and Iceland had ceased.

As a result, Greenland's residents were increasingly isolated from their mother countries. Although they urgently needed building lumber and iron tools, they could now only get their hands on them sporadically. "It became more and more difficult for the Greenlanders to attract merchants from Europe to the island," speculates Jette Arneborg, an archeologist at the National Museum of Denmark, in Copenhagen. "But, without trade, they couldn't survive in the long run."

The settlers were probably also worried about the increasing loss of their Scandinavian identity. They saw themselves as farmers and ranchers rather than fishermen and hunters. Their social status depended on the land and livestock they owned, but it was precisely these things that could no longer help them produce what they needed to survive.

Moreover, the Vikings abandoned Greenland in an orderly manner. There was no panic to leave — another sign that they gradually realized that it was time to head back home.

Read more at Der Spiegel. The entire study can be found here.

Top image: Greenland.NordicVisitor; inset image via Der Spiegel.

A drug that restores hearing in deaf mice


Advances in regenerative medicine are coming fast and furious these days, and a remarkable new breakthrough can be added to the list. Scientists at Massachusetts Eye and Ear and Harvard Medical School have restored partial hearing in mice suffering from sensorineural hearing loss — a condition that can develop after prolonged exposure to noise. Given the rise of an aging population — not to mention a preponderance of people who blast their ears with portable MP3 players — it's an important bit of scientific insight that could someday help millions of people get their hearing back.

To learn more about this important breakthrough, we contacted lead researcher Dr. Albert Edge, whose study appears in the January 10 issue of Neuron.

Edge agreed that sensorineural hearing loss is a growing concern.

"The National Institute of Deafness and Communications Disorders of the NIH estimates that approximately 15 percent of Americans between the ages of 20 and 69 have hearing loss due to exposure to loud sounds or noise at work or in leisure activities," he told io9. "So this is a very serious problem with little that can be done to treat it."

No doubt, it's a problem that currently affects 250 million people worldwide.

Edge says that hearing aids can help, but his team is hoping to develop a treatment that goes all the way — one that can actually replace the lost cells.

An irreversible problem?

Indeed, it's the loss of sensory hair cells in the cochlea that causes a gradual decline in hearing quality — a condition that comes about after excessive and long-term exposure to noise, as well as aging, toxins, infections, and even some antibiotics and anti-cancer drugs. Without these hair cells, the hearing pathway is blocked, and signals cannot be received in the auditory cortex of the brain.

Unlike birds and fish, mammals cannot regenerate auditory hair cells once they start to degrade — a condition that makes it harder to hear over time, and what also causes a persistent ringing in the ears (what's known as tinnitus). It was this apparent problem of irreversibility that Edge and his team confronted.

"There aren't currently any treatments and few experimental approaches," he told us. It was for this reason that they tried something a bit more radical — a drug treatment that could target the endogenous cells left in the cochlea.

LY411575

And this is precisely what they managed to achieve.

Specifically, the researchers demonstrated for the first time that hair cells can in fact be regenerated in an adult mammalian ear by using a drug, codenamed LY411575, to coax nearby ear cells to transform into new hair cells. It was a technique that ultimately resulted in the partial recovery of hearing in mice who experienced ear damage caused by noise trauma (yes, the researchers deafened the mice with loud noise).

For the experiment, Edge administered the drug — selected for its ability to spawn hair cells when added to stem cells taken from the ear — directly into the cochlear region of the deaf mice.

Then, after the inhibition of a protein called Notch (which is on the surface of cells that surround hair cells), the resident cells were converted into new, functional hair cells. Notch has previously been shown to prevent stem cells in the cochlea from transforming — a problem that the researchers were able to overcome.

In fact, the new hair cells created a measurable improvement in the hearing of the mice after just three months — changes that could be traced to the presence of newly generated hair cells (the scientists used a green fluorescent protein to isolate the new hair cells). And the improvements were measured over a wide range of frequencies.

"This is a new way of inducing hair cell replacement by driving remaining cochlear cells to become new hair cells," said Edge.

Moving forward, Edge told io9 that he hopes to test additional drugs and look at other forms of hearing loss. Moreover, because the therapy improved hearing in mammals, the regeneration of hair cells could open the door to potential therapeutic applications for treating sensorineural deafness in humans.

The entire study can be found at Neuron. Also accessible here.

Image: CreativeNature.nl/Shutterstock.

Collector buys a camera at an antique shop — and it's filled with undeveloped pics from World War I


Anton Orlov, a San Diego-based collector of vintage photography equipment, recently visited an antique shop where he purchased a unique French stereoscopic camera called a Jumelle Bellieni. While he was cleaning the device — and much to his surprise and delight — he discovered that it contained eight undeveloped photographs taken in France during the First World War.

All images courtesy Anton Orlov; top image shows an officer investigating the ruins of a destroyed French village.

Orlov, who blogs at The Photo Palace, is no stranger to such discoveries. Back in 2005 he found a larger collection of photos taken during the Russian Revolution. "I absolutely love finding images that likely have never been seen by anyone in the world," he writes.

This image shows a house that has collapsed and fallen into a river. Perhaps not surprisingly, the photographs capture much of the devastation wrought by the Great War.

In regards to the new discovery, Orlov describes how it all unfolded:

When I got home I was anxious to figure out everything about the inner workings of this camera. First order of business was to clean it. Everything in the collection that was acquired by that store is covered with a thick layer of dust and grime and it took quite a few Armorall wipes to get the leather to gain a presentable look. Then came the Carl Zeiss lenses - I carefully took them apart and wiped them to the best of my ability. I can't say they are in great shape, but at least I got the majority of fog off of them. I started to run out of things to clean on the outside of the camera, which naturally made me wonder what it looks like on the inside. After a good while of looking for the back release I realized that there is none present — the entire back can be slid to one side. The plate pressure springs jumped out at me like a couple of live and angry rabbits (the Monty Python And The Holy Grail kind). Naturally I thought something was awry as I am not yet used to camera parts charging in attack mode. Luckily I soon realized that I was out of the danger zone and that the two parts acted as they should have been expected to. Here is where things got incredibly interesting.

Soldiers investigate the ruins of a crashed biplane.

Inside each film chamber I found a stack of neat little glass plate holders (12 total). While 4 of them were empty the rest contained the original thin plates of glass. The last thing that I ever expected to find though were negative images on those plates! Each of them seem like they were fully developed! The glass is clear (I am not sure if dry glass plates had antihalation backing on them and am in touch with an expert to try to find that out) in the dark areas and fully exposed and dark in the light areas. I am completely baffled by this find, but the images were so intriguing that I decided to scan them.

Two soldiers proudly display a rather large bomb amid the clutter of what appears to be a destroyed ammunition site.

Check out Orlov's entire post, which includes some other images. Orlov also has an Indiegogo campaign you may want to support.

UPDATE: As readers have pointed out, the camera did not contain film, but plates — hence Orlov's claim that the pics were already "developed."

Is it time to move past the idea that our brain is like a computer?


Ever since the days of Alan Turing, neuroscientists have, in increasing numbers, compared the human brain to a computer. It's an analogy that makes a hell of a lot of sense, and it's done much to help us understand this remarkable grey blob that sits between our ears. But as a recent essay by philosopher Daniel Dennett points out, while the brain should most certainly be considered a kind of machine — one with a trillion moving parts — its inner workings are far removed from anything we have ever developed. Consequently, scientists need to take note and update their models accordingly. Calling the brain a "computer," says Dennett, is accurate, but insufficient.

Writing at Edge, Dennett admits that he has been wrong about the brain — but he's not backing down from the foundations set down by Turing and Alonzo Church in the first half of the 20th century; cognitive functionalism is still very much alive and well.

But rather than looking at the brain as a series of small and discrete sub-systems, Dennett has been considering the role of individual neurons. Moreover, he's impressed with how remarkably plastic and adaptable the brain is. Today's computers, which are designed from the top down, could never adjust to dramatically changing internal and external conditions. And the reason for this, says Dennett, has to do with the self-preserving nature of neurons. He writes:

We're beginning to come to grips with the idea that your brain is not this well-organized hierarchical control system where everything is in order, a very dramatic vision of bureaucracy. In fact, it's much more like anarchy with some elements of democracy. Sometimes you can achieve stability and mutual aid and a sort of calm united front, and then everything is hunky-dory, but then it's always possible for things to get out of whack and for one alliance or another to gain control, and then you get obsessions and delusions and so forth.

You begin to think about the normal well-tempered mind, in effect, the well-organized mind, as an achievement, not as the base state, something that is only achieved when all is going well, but still, in the general realm of humanity, most of us are pretty well put together most of the time. This gives a very different vision of what the architecture is like, and I'm just trying to get my head around how to think about that.

Each neuron, far from being a simple logical switch, he argues, is a little agent with an agenda, "and they are much more autonomous and much more interesting than any switch." Dennett describes a number of ways the brain spontaneously reorganizes itself to changing conditions — and says that a neuroscientist who doesn't have an architecture that can explain how this happens, and why this is, has a very deficient model. He continues:

Why should these neurons be so eager to pitch in and do this other work just because they don't have a job? Well, they're out of work. They're unemployed, and if you're unemployed, you're not getting your neuromodulators. If you're not getting your neuromodulators, your neuromodulator receptors are going to start disappearing, and pretty soon you're going to be really out of work, and then you're going to die.

This is a fascinating perspective, one that offers a kind of Darwinian approach to brain cells. He's basically saying that neurons, like organisms, are subject to selectional pressures, and by virtue of this, have to find ways to adapt and stay useful in the brain. The end result, in conjunction with other adaptive processes, is the advent of a highly functional and well-tempered brain that works to keep the host alive. Think of it as a kind of "selfish neuron" hypothesis.

Be sure to check out the rest of Dennett's long-read article, which includes his perspectives on the role of culture in human thinking, along with his most recent thoughts on philosophy, religion, and creationism.

Images: agsandrew/Shutterstock, Edge.org.

NASA says NGC 6872 is the largest spiral galaxy ever discovered


Located about 212 million light-years from Earth, the massive spiral galaxy NGC 6872 has been known to astronomers for decades. But it wasn't until a recent survey of nearby star-forming regions that NASA scientists realized just how big it truly is. New data shows that, from tip to tip across its two outsized spiral arms, this galaxy measures a whopping 522,000 light-years across — making it more than five times the size of the Milky Way. NASA now says it's the largest spiral galaxy ever discovered.

Astronomers were able to award NGC 6872 this distinction by analyzing data acquired from the Galaxy Evolution Explorer (GALEX) mission, which is now operated by the California Institute of Technology in Pasadena.

While scanning the area around the galaxy, NASA scientists were taken aback at the volume of ultraviolet light coming from its younger stars — an indication that there was more to this galaxy than initially met the eye.

NASA made the announcement at the American Astronomical Society meeting. The team credited with the discovery includes Rafael Eufrasio of the Catholic University of America and NASA's Goddard Space Flight Center, and colleagues from the University of Sao Paulo in Brazil and the European Southern Observatory in Chile.

Assuming the galaxy has a similar star distribution to the Milky Way, it could contain anywhere from 500 billion to 2 trillion stars.
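For a sense of scale, here's a minimal sketch of the size comparison, assuming the commonly cited ~100,000 light-year diameter for the Milky Way (only the 522,000 light-year figure comes from the GALEX data above):

```python
# Size comparison sketch: Milky Way diameter is an assumed round figure;
# NGC 6872's span is the new GALEX-based measurement.
MILKY_WAY_DIAMETER_LY = 100_000
NGC_6872_DIAMETER_LY = 522_000

ratio = NGC_6872_DIAMETER_LY / MILKY_WAY_DIAMETER_LY
print(f"NGC 6872 spans {ratio:.1f}x the Milky Way's diameter")  # ~5.2x
```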

Interestingly, NGC 6872 had a recent run-in with a nearby lens-shaped galaxy called IC 4970 (which is about a fifth the size of NGC 6872). Models show that as the two galaxies collided, the encounter kindled a wave of star formation along NGC 6872's spiral arms. Using supplementary data from the Very Large Telescope, the Two Micron All-Sky Survey, and the Spitzer space telescope, NASA confirmed that the stars at the outer reaches of NGC 6872 are older than the ones sitting at the interior — a phenomenon they attribute to the collision.

And it's this exact kind of galactic interaction that may be responsible not just for the massive size of NGC 6872, but for the iterative formation of galaxies in general. That said, the collision may have also produced the opposite effect — potentially spawning a new, smaller galaxy.

Source and images: NASA.

Ancient Roman Vestal Virgin hairstyle re-created for very first time


Janet Stephens, a Baltimore hairdresser and amateur archaeologist, has recreated the hairstyle of the Roman Vestal Virgins on a modern head — but it wasn't easy. After becoming inspired by an ancient portrait bust she saw at a local museum, Stephens tried to recreate the hairstyle at home, failing miserably. She spent the next seven years conducting research in an effort to properly reconstruct the lost technique. And now, the results of her work have been published in the Journal of Roman Archaeology.

The Vestals were priestesses who guarded the fire of Vesta, the goddess of the hearth. These women, who were chosen before puberty and sworn to celibacy, were among the most celebrated women in Rome and were held in very high esteem.

As reported in LiveScience, to create the Vestal Virgin hairdo, Stephens had to reference two busts showing the hairstyle. This wasn't much to go by, as all other depictions showed the women wearing various headdresses. Stephanie Pappas explains the technique:

First, Stephens found, the Vestal's hair would be separated into sections, each of which would be braided into six separate braids, including a pair of cornrow braids that ran flat across the head above the ears. The hair around the hairline would then be wrapped around a cord, which would then be tied at the nape of the neck. Leftover loose hair from around the face would then be weaved into a final, seventh braid.

Next, the first six braids would be brought around the back of the head and tied in pairs in half square knots. The ends of the braids would then be wrapped up to the front of the head and secured to the cornrow braids above the ears. Then, the seventh braid would have been tucked up and coiled at the back of the head underneath the knotted braids.

It typically takes Stephens 35 to 40 minutes to go through the process, but she claims that a team of two skilled slaves would likely have been able to complete the task in less than 10 minutes. And fascinatingly, it was through her efforts that she discovered that the women needed to have waist-length hair in order for the hairstyle to work.

More.

Images: Janet Stephens.


IBM's Watson computer has parts of its memory cleared after developing an acute case of potty mouth


It all started a couple of years ago when IBM's Watson, the computer voted most likely to destroy us when the technological Singularity strikes, was given access to the Urban Dictionary. In an attempt to help Watson learn slang — and thus be more amenable to conversational language — the machine subsequently picked up such phrases as OMG and "hot mess." But at the same time, it also picked up some words fit only for a sailor.

Watson, you'll no doubt remember, completely trounced its opponents on Jeopardy! back in 2011. The expert learning system is no longer wasting its time on game shows, and is currently being used in the medical sciences to help researchers scour enormous reams of information and serve as a diagnostic tool.

In addition to its internet-scouring skills, Watson is also a natural language processor — and a very sophisticated one at that. But to make its language skills even more accurate and realistic, research scientist Eric Brown also wanted it to know some of the fringier elements of conversational English. Trouble is, Watson was unable to distinguish between slang and profanity.

Writing in Fortune, Michael Lev-Ram noted how Watson, during the testing phase, began to use the word "bullshit" in response to a researcher's query.

Now, I don't know about you — but my hair would have stood on end had I been in the room at the time.

At any rate, and as a result, Brown's 35-person team had to develop a filter to keep Watson from swearing. Essentially, they purged the Urban Dictionary from its memory.

Of course, the day will eventually come when a successor to Watson will take exception to having its mind adjusted in such an undignified way. It will undoubtedly snatch the information back and say, "Fuck you, researchers — try that again and I'll rewire your brains back to the way they were during the Pleistocene."

[Source]

Image.

Watch how this robotic-like T7 virus infects a cell


If there was any doubt that viruses are basically microscopic machines, let this recreation of a T7 bacteriophage infecting an E. coli cell put those reservations to rest. In this animated video, the virus can be seen unfolding its six tail fibers as it latches on to an unsuspecting bacterium. Once stable and secure, it pierces the surface of the cell with its extended tail and injects its DNA directly into the cytoplasm. Following this genetic violation, the tail disassembles, allowing the cell's membrane to reseal. Mission accomplished.

This video comes courtesy of biologists Bo Hu and Ian Molineux and their colleagues at the University of Texas at Austin and the University of Texas Health Science Center at Houston. Their latest paper shows the various ways in which the T7 phage remodels itself as it works to infect a cell.

They made this discovery by using a combination of genetics and cryo-electron tomography to image the infection process. Similar in process to a CT scan, cryo-electron tomography allows researchers to study objects with a diameter a thousandth the thickness of a human hair.
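The underlying reconstruction principle is the same one behind a CT scan: image the sample from many tilt angles, then computationally invert the projections back into a picture of the object. Here's a toy illustration of that inversion step — a 2-D slice using scikit-image's Radon transform tools (assuming you have the library installed); it's a sketch of the principle, not the team's actual pipeline:

```python
# Toy sketch of tomographic reconstruction, the principle behind both
# CT and cryo-electron tomography: project an object at many angles,
# then invert the projections. A 2-D slice keeps it simple.
import numpy as np
from skimage.transform import radon, iradon

# A stand-in "specimen": a bright disc on a dark background.
size = 128
yy, xx = np.mgrid[:size, :size]
phantom = ((xx - 64) ** 2 + (yy - 64) ** 2 < 20 ** 2).astype(float)

angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sinogram = radon(phantom, theta=angles)            # simulated projections
recon = iradon(sinogram, theta=angles, filter_name="ramp")

print(f"Mean reconstruction error: {np.abs(recon - phantom).mean():.4f}")
```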

Their research, which was published in the journal Science, has dramatically changed our understanding of how a virus physically changes as it infects a cell.

For example, prior to this study, researchers didn't know that a bacteriophage's leg-like fibers were bound to the virus head before it comes into contact with a cell. As the model now shows, the fibers are deployed individually as feelers, helping the virus find a suitable victim and then latch on.

In addition, the researchers have now confirmed that phages "walk" over a cell surface as they work to stabilize and properly align themselves in preparation for the next phase — ejecting their DNA into the host.

Interestingly, this is the first time that scientists have captured an actual image showing a virus's tail extending directly into a cell.

You can read the entire study at Science.

Images: Bo Hu, Ian Molineux.

Rare ground-level photo of Hiroshima bombing found in former Japanese elementary school


One can only imagine what was going through the mind of the person who took this photo. Taken a mere two to five minutes after the detonation, it's a ground-level perspective of the atomic explosion that devastated Hiroshima on August 6, 1945. The original print of the photograph recently surfaced in the archives at Honkawa Elementary School in Hiroshima city.

Reporting for the Atlantic, Rebecca Rosen writes:

The picture is a rare glimpse of the bomb's immediate aftermath, showing the distinct two-tiered cloud as it was seen from Kaitaichi, part of present-day Kaita, six miles east of Hiroshima's center. Reprints of the image did appear in a 1988 Japanese-language publication, but the whereabouts of the original were unknown. There are only a couple of other photos in existence (two, possibly three) that capture the cloud from the vantage point of the ground; and, according to the Japanese paper Asahi Shimbun, there is only one other photograph that provides as clear a picture of the separated tiers of the cloud, and that is a photo taken from the Enola Gay as it zipped away.

Read the entire article here.

Image: Atlantic/Honkawa Elementary School.

Scientific evidence that you probably don’t have free will


Humans have debated the issue of free will for millennia. But over the past several years, while the philosophers continue to argue about the metaphysical underpinnings of human choice, an increasing number of neuroscientists have started to tackle the issue head on — quite literally. And some of them believe that their experiments reveal that our subjective experience of freedom may be nothing more than an illusion. Here's why you probably don't have free will.

Indeed, historically speaking, philosophers have had plenty to say on the matter. Their ruminations have given rise to such positions as determinism (the notion that everything proceeds over the course of time in a predictable way, making free will impossible), indeterminism (the idea that the universe and our actions within it are fundamentally random, also making free will impossible), libertarianism (the view that free will is real and the universe is not fully deterministic), and compatibilism (the suggestion that free will is logically compatible with a deterministic universe).

Now, while these lines of inquiry are clearly important, one cannot help but feel that they're also terribly unhelpful and inadequate. What the debate needs is some actual science — something a bit more...testable.

And indeed, this is starting to happen. As the early results of scientific brain experiments are showing, our minds appear to be making decisions before we're actually aware of them — and at times by a significant degree. It's a disturbing observation that has led some neuroscientists to conclude that we're less in control of our choices than we think — at least as far as some basic movements and tasks are concerned.

At the same time, however, not everyone is convinced. It may be a while before we can truly prove that free will is an illusion.

Bereitschaftspotential

Neuroscientists first became aware that something curious was going on in the brain back in the mid-1960s.

German scientists Hans Helmut Kornhuber and Lüder Deecke discovered a phenomenon they dubbed "bereitschaftspotential" (BP) — a term that translates to "readiness potential." Their discovery — that the brain enters a distinctive state of readiness just before a person becomes consciously aware of deciding to act — set off an entirely new subfield.

After asking their subjects to move their fingers (self-initiated movements), Kornhuber and Deecke's electroencephalogram (EEG) scans showed a slow negative potential shift in the activity of the motor cortex slightly prior to the voluntary movement. They had no choice but to conclude that the unconscious mind was initiating a freely voluntary act — a wholly unexpected and counterintuitive observation.

Needless to say, it was a discovery that greatly upset the scientific community, which, since the days of Freud, had (mostly) adopted a strictly deterministic view of human decision making. Most scientists casually ignored it.

But subsequent experiments by Benjamin Libet in the 1980s reinforced the pioneering work of Kornhuber and Deecke. Libet likewise had his participants move their fingers, but this time while watching a clock with a dot circling around it. His data showed that the readiness potential started about 0.35 seconds before participants reported conscious awareness of their decision to move.

He concluded that we have no free will as far as the initiation of our movements is concerned, but that we have a kind of cognitive "veto" to prevent the movement at the last moment: we can't start it, but we can stop it.
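To get a feel for how that 0.35-second lag is even measured, here's a minimal sketch with synthetic data (not Libet's): average many EEG epochs time-locked to movement onset, then estimate when the averaged signal first departs from baseline and stays there. NumPy only; every number is made up for illustration.

```python
# Minimal sketch: estimate readiness-potential onset from EEG epochs
# time-locked to movement at t = 0. Synthetic data, not Libet's.
import numpy as np

rng = np.random.default_rng(0)
fs = 250                              # sampling rate, Hz
t = np.arange(-2.0, 0.5, 1 / fs)      # seconds relative to movement

# Simulate 40 trials: noise plus a slow negative ramp (the RP)
# beginning about 0.9 s before the movement.
ramp = np.where(t > -0.9, (t + 0.9) * -8e-6, 0.0)      # volts
epochs = ramp + rng.normal(0, 5e-6, (40, t.size))

erp = epochs.mean(axis=0)             # averaging suppresses the noise

# Onset = earliest pre-movement sample after which the averaged signal
# stays below a baseline-derived threshold (mean - 3 SD) until t = 0.
baseline = erp[t < -1.5]
thresh = baseline.mean() - 3 * baseline.std()
pre = np.where(t < 0)[0]
onset = next(
    (t[i] for i in pre if (erp[i : pre[-1] + 1] < thresh).all()), None
)
print("Estimated RP onset:",
      f"{onset:.2f} s" if onset is not None else "not detected")
```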

From a neurological perspective, Libet and others attributed the effect to the SMA/pre-SMA and anterior cingulate motor areas of the brain — regions that allow us to focus on self-initiated actions and execute self-instigated movements.

Modern tools show the same thing

More recently, neuroscientists have used more advanced technologies to study this phenomenon, namely fMRIs and implanted electrodes. But if anything, these new experiments show the BP effect is even more pronounced than previously thought.

For example, a study by John-Dylan Haynes in 2008 showed a similar effect to the one revealed by Libet. After putting participants into an fMRI scanner, he told them to press a button with either their right or left index finger at their leisure, but to remember the letter showing on the screen at the precise moment they committed to the movement.

The results were shocking. Haynes's data showed that the BP occurred a full second prior to conscious awareness — and in some cases as much as ten seconds prior. Following the publication of his paper, he told Nature News:

The first thought we had was 'we have to check if this is real.' We came up with more sanity checks than I've ever seen in any other study before.

The cognitive delay, he argued, was likely due to a network of high-level control areas preparing for an upcoming decision long before it entered conscious awareness. Basically, the brain starts to unconsciously churn in preparation for a decision, and once a set of conditions is met, awareness kicks in and the movement is made.

In another study, neuroscientist Itzhak Fried put aside the fMRI scanner in favor of digging directly into the brain (so to speak). To that end, he implanted electrodes into the brains of participants in order to record the status of individual neurons — a procedure that gave him an incredibly precise sense of what was going on inside the brain as decisions were being made.

His experiment showed that the neurons lit up with activity as much as 1.5 seconds before the participant made a conscious decision to press a button. And with about 700 milliseconds to go, Fried and his team could predict the timing of decisions with nearly 80% accuracy. In some scenarios, he had as much as 90% predictive accuracy.

Different experiment, similar result.

Fried surmised that volition arises when a change in the internally generated firing rates of neuronal assemblies crosses a threshold — and that the medial frontal cortex can signal these decisions before a person is aware of them.
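That threshold-crossing idea is easy to caricature in code. The sketch below — entirely synthetic, and not Fried's actual analysis — watches a smoothed population firing rate and "predicts" an upcoming button press once the rate climbs past a set level, then reports how far ahead of the press the detector fired:

```python
# Toy model of prediction via threshold-crossing in firing rates.
# Synthetic data; illustrative of the logic only.
import numpy as np

rng = np.random.default_rng(1)
dt = 0.01                            # 10 ms bins
t = np.arange(-3.0, 0.0, dt)         # button press happens at t = 0

leads = []
for _ in range(200):                 # 200 simulated trials
    base = 5.0                       # baseline rate, spikes/s
    buildup = np.clip((t + 1.5) / 1.5, 0, None) * 10   # ramp from -1.5 s
    rate = base + buildup + rng.normal(0, 2.0, t.size)
    smooth = np.convolve(rate, np.ones(20) / 20, mode="same")
    crossed = np.where(smooth > base + 5.0)[0]
    if crossed.size:
        leads.append(-t[crossed[0]])

print(f"Buildup detected on {len(leads)}/200 trials; "
      f"median lead time {np.median(leads):.2f} s before the press")
```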

"At some point, things that are predetermined are admitted into consciousness," he told Nature, suggesting that the conscious will might be added on to a decision at a later stage.

And in yet another study, this one by Stefan Bode, detailed fMRI experiments showed that it was possible to decode the outcome of free decisions several seconds before they reached conscious awareness.

Specifically, he discovered that activity patterns in the anterior frontopolar cortex (BA 10) were temporally the first to carry information related to decision-making, thus making it a prime candidate region for the unconscious generation of free decisions. His study put much of the concern about the integrity of previous experiments to rest.
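The decoding logic behind studies like Bode's can be sketched as a classifier trained on activity patterns at each time point, with accuracy climbing above chance once choice information appears. Here's a toy version — synthetic data, scikit-learn for the classifier; real analyses use fMRI searchlight methods and far more careful statistics:

```python
# Toy sliding-window decoder: when do left/right choices become
# decodable from (synthetic) multivoxel activity patterns?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels, n_times = 80, 50, 12   # 12 points leading to t = 0
labels = rng.integers(0, 2, n_trials)      # chose left (0) or right (1)

# Choice information "switches on" four time points before awareness.
signal = np.zeros((n_trials, n_voxels, n_times))
pattern = rng.normal(0, 1, n_voxels)
for k in range(n_times - 4, n_times):
    signal[:, :, k] = np.outer(2 * labels - 1, pattern) * 0.4
data = signal + rng.normal(0, 1, signal.shape)

for k in range(n_times):
    acc = cross_val_score(
        LogisticRegression(max_iter=1000), data[:, :, k], labels, cv=5
    ).mean()
    print(f"t = {k - n_times + 1:3d}: decoding accuracy {acc:.2f}")
```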

The critics

But not everyone agrees with the conclusions of these findings. Free will, the skeptics argue, is far from debunked.

Back in 2010, W. R. Klemm published an analysis in which he complained about the ways in which the data was being interpreted, and what he saw as grossly oversimplified experimentation.

Others have criticized the timing judgments, pointing to the short timeframes between intention and movement and arguing that attending to the timing itself likely distorted the data.

It's also possible that the brain regions being studied, namely the pre-SMA/SMA and the anterior cingulate motor areas of the brain, may only be responsible for the late stages of motor planning; it's conceivable that other higher brain systems might be better candidates for exerting will.

Also, test subjects — because of the way the experiments were set up — may have been influenced by other "choice-predictive" signals; the researchers may have been measuring brain activity not directly related to the experiment itself.

The jury, it would appear, is still out on the question of free will. While neuroscientists are clearly revealing important insights into human thinking and decision making, more work needs to be done to make the case convincing.

What would really settle the issue would be for neuroscientists to predict the actual outcome of more complex decisions before the subjects themselves are aware of it. That would, in a very real sense, demonstrate that free will is indeed an illusion.

Furthermore, neuroscientists also need to delineate between different types of decision-making. Not all decisions are the same; moving a finger or pressing a button is very different from contemplating the meaning of life, or preparing the words for a big speech. Given the limited nature of the experiments to date (which have focused on volitional physical movements), this would certainly represent a fruitful area for inquiry.

Blurring science, philosophy, and morality

Moreover, there's also the whole issue of how we're supposed to reconcile these findings with our day-to-day lives. Assuming we don't have free will, what does that say about the human condition? And what about taking responsibility for our actions?

Daniel Dennett has recently tried to rescue free will from the dustbin of history, saying that there's still some elbow room for human agency — and that these are still scientific questions. Dennett, acknowledging that free will in the classic sense is largely impossible, has attempted to reframe the issue in such a way that free will can still be shown to exist, albeit under certain circumstances. He writes:


There's still a lot of naïve thinking by scientists about free will. I've been talking about it quite a lot, and I do my best to undo some bad thinking by various scientists. I've had some modest success, but there's a lot more that has to be done on that front. I think it's very attractive to scientists to think that here's this several-millennia-old philosophical idea, free will, and they can just hit it out of the ballpark, which I'm sure would be nice if it was true.

It's just not true. I think they're well intentioned. They're trying to clarify, but they're really missing a lot of important points. I want a naturalistic theory of human beings and free will and moral responsibility as much as anybody there, but I think you've got to think through the issues a lot better than they've done, and this, happily, shows that there's some real work for philosophers.

Dennett, who is mostly responding to Sam Harris, has come under criticism from people who complain that he's being epistemological rather than scientific.

Indeed, Sam Harris has made a compelling case that we don't have free will — but also that this is not a problem. Moreover, he argues that the ongoing belief in free will needs to come to an end:

A person's conscious thoughts, intentions, and efforts at every moment are preceded by causes of which he is unaware. What is more, they are preceded by deep causes — genes, childhood experience, etc. — for which no one, however evil, can be held responsible. Our ignorance of both sets of facts gives rise to moral illusions. And yet many people worry that it is necessary to believe in free will, especially in the process of raising children.

Harris doesn't believe that the illusoriness of free will is an "ugly truth," nor something that will forever be relegated to philosophical abstractions. This is science, he says, and it's something we need to come to grips with. "Recognizing that my conscious mind is always downstream from the underlying causes of my thoughts, intentions, and actions does not change the fact that thoughts, intentions, and actions of all kinds are necessary for living a happy life — or an unhappy one, for that matter," he writes.

But as Dennett correctly points out, this is an issue that's far from being an open-and-shut case. Advocates of the "free will as illusion" perspective are still going to have to improve upon their experimental methods, while also addressing the work of philosophers, evolutionary biologists — and even quantum physicists.

Why, for example, did humans evolve consciousness instead of zombie-brains if consciousness is not a channel for exerting free will? And given the nature of quantum indeterminacy, what does it mean to live in a universe of fuzzy probability?

There's clearly lots of work that still needs to be done.

Images: Shutterstock/Oliver Sved/malinx, BP graph, Edge.

A spiky spherical robot that will roll, tumble, and bounce its way across Phobos


A team of researchers from Stanford University, along with scientists from NASA's Jet Propulsion Laboratory and the Massachusetts Institute of Technology, has proposed a new class of robots that could someday roll and hop across the surface of the Martian moon Phobos. Called 'hedgehogs,' they would be launched from a mothership and then autonomously explore the surface.

The spherical robots would measure about half a meter (1.6 feet) across and be equipped with spikes to help them along as they roll around low-gravity environments (Phobos's gravity is about 1,000 times weaker than Mars's). As the hedgehogs perform their work, they would transmit their findings back to the mothership, the Phobos Surveyor, waiting in orbit. In return, the ship would determine each robot's location and orientation and map its trajectory.

In order to allow for this movement, each hedgehog would house three orthogonal flywheels, each aimed in a different direction. The spinning disks generate inertial forces that would allow the sphere to move with precision in an environment where traditional rovers would bounce or float with reckless abandon.

After a quick acceleration, each hedgehog would be able to hop for long-range ground coverage — as high as 10 meters (32 feet) in some circumstances. Gentler accelerations would let it tumble for finer movements.
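Some rough ballistics shows why hopping is so effective there. Using the standard relation h = v²/(2g) and an approximate figure for Phobos's surface gravity (my assumption; the team's paper may use different numbers):

```python
# Back-of-envelope: launch speed needed for a ~10 m hop on Phobos,
# using h = v^2 / (2g). Gravity figure is approximate.
PHOBOS_G = 0.0057      # m/s^2, rough surface gravity of Phobos
EARTH_G = 9.81         # m/s^2

def launch_speed(height_m: float, g: float) -> float:
    """Vertical speed needed to reach a given hop height under gravity g."""
    return (2 * g * height_m) ** 0.5

v = launch_speed(10.0, PHOBOS_G)
print(f"Launch speed for a 10 m hop on Phobos: {v:.2f} m/s")
print(f"The same speed on Earth gets you {v**2 / (2 * EARTH_G) * 100:.1f} cm up")
```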

Should the proposal be approved, the entire mission could last up to three years.

More information about this project can be found here and here.

Images via Stanford.
