h+ Media http://hplusmagazine.com Elevating the Human Condition Thu, 30 Jul 2015 18:51:02 +0000 en-US hourly 1

The Cellular Origins of Intelligence http://hplusmagazine.com/2015/07/30/the-cellular-origins-of-intelligence/ http://hplusmagazine.com/2015/07/30/the-cellular-origins-of-intelligence/#comments Thu, 30 Jul 2015 18:51:02 +0000 http://hplusmagazine.com/?p=28424

The post The Cellular Origins of Intelligence appeared first on h+ Media.

Are You Smarter Than a Reptile?

Are you smarter than a reptile?  In many respects, you certainly are. After all, no reptile is going to read this article. However, our clearly superior intellectual abilities in certain skills have seduced us into a dismissive attitude toward the surprisingly deep and broad range of analytical gifts of our companion creatures. A growing body of research now indicates that other animals of all sizes and varieties are highly intelligent problem solvers within their own realms. After all, their cognitive skills have enabled them to survive successfully for eons, and that may not necessarily prove to be true of us humans.

Consider termites. They are strikingly social animals and have constructed elaborate societies for 200 million years. They engage in a primitive sort of agriculture, farming varieties of fungus for food. As individuals, they demonstrate remarkable intelligence, and an even more surprising group intelligence enables complicated feats of soil engineering in a diverse range of environments. Within their complex societal structure, termites divide labor among varied types of specialized workers: some handle infant care, manual labor or reproduction, while soldiers defend the colony. All of this proceeds via highly evolved and complex patterns of communication and signaling.

Surface reconstruction of the honeybee standard brain (HSB).

Individual bees are intelligent and can even solve mathematical problems. For example, they effectively solve the Traveling Salesman problem of optimizing the most efficient route for visiting a large number of locations in a single day. Bees communicate in a rich, symbolic, non-verbal language that enables them to transmit abstract concepts to others, such as the location of particular flowers over large distances, based on the angle of the sun. They even seem to understand some rudimentary concepts of medical care, utilizing medications within their hives. For example, honeybee colonies have been demonstrated to self-medicate with plant resin to combat fungal infections.
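The route-optimization problem the bees are credited with solving can be illustrated with a simple nearest-neighbor heuristic. This is a sketch of one greedy approximation to the Traveling Salesman problem, not a claim about how bees actually compute; the flower coordinates are invented:

```python
import math

def nearest_neighbor_route(flowers, start):
    """Greedy route: always visit the closest unvisited flower next."""
    unvisited = set(flowers)
    route = [start]
    current = start
    while unvisited:
        # Pick the unvisited flower at the smallest Euclidean distance.
        nxt = min(unvisited, key=lambda f: math.dist(current, f))
        unvisited.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

flowers = [(2, 3), (5, 1), (1, 8), (6, 6)]
route = nearest_neighbor_route(flowers, (0, 0))
print(route)  # [(0, 0), (2, 3), (5, 1), (6, 6), (1, 8)]
```

The greedy route is not always globally optimal, which is part of what makes the bees' observed efficiency notable.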

Termite Head from https://chemoton.wordpress.com/tag/insects/

What about ants? They’re no slouches. They can navigate long distances to find food and can readily communicate its location to others. As individuals, they can seek out family members, memorize multiple alternate locations and integrate a large number of sources of information. They are even altruistic and will help other ants in distress.

Modern research is teaching us that intelligence is not directly linked to brain volume. Brains of all sizes can demonstrate high intelligence. Birds have small brains but are terrific problem solvers. They are highly cooperative and exhibit a wide range of highly intelligent behaviors. For example, they use vocal learning; their songs are a complex language. Did you realize that they give lifelong names to their young? They are even known to mourn the loss of others. Birds also have a gyroscopic sense of geography and can store seeds in thousands of places that they can remember. Can we do that?

Perhaps you suppose that only humans are capable of understanding analogies. However, crows can use analogies to solve higher-order tasks. They understand sharing, can use rudimentary arithmetic and can invent meanings for words. Cockatoos can solve puzzles with at least five steps. They can even keep time to music.

Might fish be intellectually impaired? In fact, fish lead complex social lives and are highly intelligent. In a comparison of the intellectual capacity of primates and fish, who do you think should win? In a food test comparing fish with monkeys, chimpanzees and orangutans, it was the fish that proved more adept at learning the advantages of certain patterns of food choices, and they learned faster. Individual fish also have personalities: timid ones stay timid and aggressive ones remain bold. They demonstrate individually distinguishable levels of curiosity and social ability. Fish can play, have excellent memories and perform complex courtship rituals. Tuskfish even use tools to open shells for food, an act of intellect once considered exclusive to humans but now known to be widely distributed among species.

Certainly then, we must be much smarter than microbes.

Paramecium

However, if intelligence is construed as using information to solve problems in order to reproduce and survive in hostile environments, then microbes might be considered among the most intelligent. Some bacterial strains and even some viruses have survived essentially unchanged for hundreds of millions of years, in part by using elaborate signaling patterns for communication among themselves and with others.

So what might we make of this intelligence, distributed far more widely throughout the world than had previously been understood?

  • Our intelligence might be of a unique kind, but it is not the only intelligence of consequence on this planet. Ours is just different and suited to the types of problems that we need to solve.
  • We have vastly underestimated the intelligence, feelings and complexity of the inner lives of our companion creatures on this planet. The implications are profound for our relationship towards them and our stewardship of the planet we share.
  • The ubiquity of refined intelligence requires a thorough re-examination of our evolutionary narrative. Intelligence exists at every scope and scale, and it underscores every aspect of evolutionary development.
  • This emerging understanding teaches us that all cognitive ability starts at the cellular level. All complex creatures must in turn be viewed as integrated collections of intelligent cells, vast collaboratives of cellular intelligence – we in our human package, and they in theirs.  

While our form of collective intelligence may be privileged compared to others, it is not different in its essence. As a species, we would do well to grasp this vital truth.   

###

Dr. Bill Miller has been a physician in academic and private practice for over 30 years. He is the author of The Microcosm Within: Evolution and Extinction in the Hologenome. He currently serves as a scientific advisor to OmniBiome Therapeutics, a pioneering company in discovering and developing solutions to problems in human fertility and health through management of the human microbiome.

For more information please see www.themicrocosmwithin.com.

Artificial Intelligence and Algorithmic Creativity http://hplusmagazine.com/2015/07/30/artificial-intelligence-and-algorithmic-creativity/ http://hplusmagazine.com/2015/07/30/artificial-intelligence-and-algorithmic-creativity/#comments Thu, 30 Jul 2015 18:07:05 +0000 http://hplusmagazine.com/?p=28419

The post Artificial Intelligence and Algorithmic Creativity appeared first on h+ Media.

Which paintings were the most creative of their time? An algorithm may hold the answers

From Picasso’s The Young Ladies of Avignon to Munch’s The Scream, what was it about these paintings that arrested people’s attention upon viewing them, that cemented them in the canon of art history as iconic works?

In many cases, it’s because the artist incorporated a technique, form or style that had never been used before. They exhibited a creative and innovative flair that would go on to be mimicked by artists for years to come.

Throughout human history, experts have often highlighted these artistic innovations, using them to judge a painting’s relative worth. But can a painting’s level of creativity be quantified by Artificial Intelligence (AI)?

At Rutgers’ Art and Artificial Intelligence Laboratory, my colleagues and I proposed a novel algorithm that assessed the creativity of any given painting, while taking into account the painting’s context within the scope of art history.

In the end, we found that, when introduced with a large collection of works, the algorithm can successfully highlight paintings that art historians consider masterpieces of the medium.

The results show that humans are no longer the only judges of creativity. Computers can perform the same task – and may even be more objective.

How is creativity defined?

Of course, the algorithm depended on addressing a central question: how do you define – and measure – creativity?

There is a historically long and ongoing debate about how to define creativity. We can describe a person (a poet or a CEO), a product (a sculpture or a novel) or an idea as being “creative.”

In our work, we focused on the creativity of products. In doing so, we used the most common definition for creativity, which emphasizes the originality of the product, along with its lasting influence.

These criteria resonate with Kant’s definition of artistic genius, which emphasizes two conditions: being original and “exemplary.”

They’re also consistent with contemporary definitions, such as Margaret A Boden’s widely accepted notion of Historical Creativity (H-Creativity) and Personal/Psychological Creativity (P-Creativity). The former assesses the novelty and utility of the work with respect to the scope of human history, while the latter evaluates the novelty of ideas with respect to their creator.

Building the algorithm

Using computer vision, we built a network of paintings from the 15th to 20th centuries. Using this web (or network) of paintings, we were able to make inferences about the originality and influence of each individual work.

Through a series of mathematical transformations, we showed that the problem of quantifying creativity could be reduced to a variant of network centrality problems – a class of algorithms that are widely used in the analysis of social interaction, epidemic analysis and web searches. For example, when you search the web using Google, Google uses an algorithm of this type to navigate the vast network of pages and identify the individual pages that are most relevant to your search.
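To give a flavor of the centrality computation involved, here is a generic PageRank-style power iteration on a toy directed graph. This is a textbook sketch of the class of algorithm described, not the authors' actual method or data:

```python
def pagerank(links, damping=0.85, iters=50):
    """Power iteration for PageRank-style centrality.
    links maps each node to the list of nodes it points to."""
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1 - damping) / n for node in nodes}
        for node, outs in links.items():
            if not outs:
                # Dangling node: spread its rank evenly over all nodes.
                for m in nodes:
                    new[m] += damping * rank[node] / n
            else:
                # Each node passes its rank equally to its targets.
                for m in outs:
                    new[m] += damping * rank[node] / len(outs)
        rank = new
    return rank

# Toy graph: C is pointed to by both A and B, so it is most central.
ranks = pagerank({"A": ["C"], "B": ["C"], "C": ["A"]})
print(ranks)
```

In the creativity setting, edges would encode visual similarity between earlier and later paintings rather than hyperlinks, but the centrality machinery is of this same family.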

Any algorithm’s output depends on its input and parameter settings. In our case, the input was what the algorithm saw in the paintings: color, texture, use of perspective and subject matter. Our parameter setting was the definition of creativity: originality and lasting influence.

The algorithm made its conclusions without any encoded knowledge about art or art history, and made its assessments of paintings strictly by using visual analysis and considering their dates.

Innovation identified

Edvard Munch would be delighted to learn that the algorithm gave The Scream a high score.
Wikimedia Commons

When we ran an analysis of 1,700 paintings, there were several notable findings. For example, the algorithm scored the creativity of Edvard Munch’s The Scream (1893) much higher than its late 19th-century counterparts. This, of course, makes sense: it’s been deemed one of the most outstanding Expressionist paintings, and is one of the most-reproduced paintings of the 20th century.

The algorithm also gave Picasso’s Ladies of Avignon (1907) the highest creativity score of all the paintings it analyzed between 1904 and 1911. This is in line with the thinking of art historians, who have indicated that the painting’s flat picture plane and its application of Primitivism made it a highly innovative work of art – a direct precursor to Picasso’s Cubist style.

The algorithm also pointed to several of Kazimir Malevich’s first Suprematism paintings from 1915 (such as Red Square) as highly creative. Their style was an outlier in a period then dominated by Cubism. For the period between 1916 and 1945, the majority of the top-scoring paintings were by Piet Mondrian and Georgia O’Keeffe.

Of course, the algorithm didn’t always coincide with the general consensus among art historians.

For example, the algorithm gave a much higher score to Domenico Ghirlandaio’s Last Supper (1476) than to Leonardo da Vinci’s eponymous masterpiece, which appeared about 20 years later. The algorithm favored da Vinci’s St John the Baptist (1515) over his other religious paintings that it analyzed. Interestingly, da Vinci’s Mona Lisa didn’t score highly with the algorithm.

A graph highlighting certain paintings deemed most creative by the algorithm.
Author provided

Withstanding the test of time

Given the aforementioned departures from the consensus of art historians (notably, the algorithm’s evaluation of da Vinci’s works), how do we know that the algorithm generally worked?

As a test, we conducted what we called “time machine experiments,” in which we changed the date of an artwork to some point in the past or the future, and recomputed its creativity score.
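The logic of such an experiment can be sketched with a toy scoring function. The similarity measure, feature vectors and dates below are all invented for illustration; this is not the paper's actual creativity score:

```python
def creativity(paintings, index, date=None):
    """Toy creativity score: a work scores high if it is dissimilar to
    earlier works (originality) and similar to later works (influence).
    paintings is a list of (year, feature_vector) pairs; similarity is
    a plain dot product here."""
    year, feats = paintings[index]
    if date is not None:
        year = date  # the "time machine": reassign the work's date

    def sim(a, b):
        return sum(x * y for x, y in zip(a, b))

    earlier = [f for i, (y, f) in enumerate(paintings) if i != index and y < year]
    later = [f for i, (y, f) in enumerate(paintings) if i != index and y > year]
    return sum(sim(feats, f) for f in later) - sum(sim(feats, f) for f in earlier)

# A hypothetical "innovative" 1900 work: unlike 1600s art, echoed by later art.
paintings = [(1600, (1, 0)), (1850, (0, 1)), (1900, (0, 1)), (1950, (0, 1))]
base = creativity(paintings, 2)              # judged at its true date
moved = creativity(paintings, 2, date=1550)  # judged as if painted in 1550
print(base, moved)
```

In this toy data the score rises when the innovative work is moved back in time, mirroring the gains the authors report for Impressionist and Expressionist paintings relocated to around 1600.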

We found that paintings from the Impressionist, Post-Impressionist, Expressionist and Cubism movements saw significant gains in their creativity scores when moved back to around AD 1600. In contrast, Neoclassical paintings did not gain much when moved back to 1600, which is understandable, because Neoclassicism is considered a revival of the Renaissance.

Meanwhile, paintings from Renaissance and Baroque styles experienced losses in their creativity scores when moved forward to AD 1900.

We don’t want our research to be perceived as a potential replacement for art historians, nor do we hold the opinion that computers are a better determinant of a work’s value than a set of human eyes.

Rather, we’re motivated by Artificial Intelligence (AI). The ultimate goal of research in AI is to make machines that have perceptual, cognitive and intellectual abilities similar to those of humans.

We believe that judging creativity is a challenging task that combines these three abilities, and our results are an important breakthrough: proof that a machine can perceive, visually analyze and consider paintings much like humans can.

###

Ahmed Elgammal is Professor of Computer Vision at Rutgers University.

This article was originally published on The Conversation.
Read the original article.

How to Study Algorithms: Challenges and Methods http://hplusmagazine.com/2015/07/28/how-to-study-algorithms-challenges-and-methods/ http://hplusmagazine.com/2015/07/28/how-to-study-algorithms-challenges-and-methods/#comments Tue, 28 Jul 2015 22:03:46 +0000 http://hplusmagazine.com/?p=28404

The post How to Study Algorithms: Challenges and Methods appeared first on h+ Media.


Algorithms are important. They lie at the heart of modern data-gathering and analysing networks, and they are fueling advances in AI and robotics. On a conceptual level, algorithms are straightforward and easy to understand — they are step-by-step instructions for taking an input and converting it into an output — but on a practical level they can be quite complex. One reason for this is the two translation problems inherent to the process of algorithm construction. The first problem is converting a task into a series of defined, logical steps; the second problem is converting that series of logical steps into computer code. This process is value-laden, open to bias and human error, and the ultimate consequences can be philosophically significant. I explained all these issues in a recent post.
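The two translation problems can be made concrete with a trivial, hypothetical task, "find the largest number in a list," sketched here in Python:

```python
# Translation problem 1: the task becomes defined, logical steps (pseudocode):
#   1. assume the first item is the largest seen so far
#   2. walk through the remaining items one by one
#   3. if an item exceeds the largest seen so far, remember it instead
#   4. when the items run out, report the largest seen so far

# Translation problem 2: those logical steps become computer code:
def largest(items):
    biggest = items[0]
    for item in items[1:]:
        if item > biggest:
            biggest = item
    return biggest

print(largest([3, 41, 7, 19]))  # 41
```

Even this trivial case involves choices (what should happen with an empty list, or with ties?), which is where the value-laden, error-prone character of both translations creeps in.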

Granting that algorithms are important, it seems obvious that they should be subjected to greater critical scrutiny, particularly among social scientists who are keen to understand their societal impact. But how can you go about doing this? Rob Kitchin’s article ‘Thinking critically about and researching algorithms’ provides a useful guide. He outlines four challenges facing anyone who wishes to research algorithms, and six methods for doing so. In this post, I wish to share these challenges and methods.

Nothing I say in this post is particularly ground-breaking. I am simply summarising the details of Kitchin’s article. I will, however, try to collate everything into a handy diagram at the end of the post. This might prove to be a useful cognitive aid for people who are interested in this topic.

1. Four Challenges in Algorithm Research
Let’s start by looking at the challenges. As I just mentioned, on a conceptual level algorithms are straightforward. They are logical and ordered recipes for producing outputs. They are, in principle, capable of being completely understood. But in practice this is not true. There are several reasons for this: some are legal or cultural, some are technical. Each of them constitutes an obstacle that the researcher must either avoid or, at least, be aware of.

Kitchin mentions four obstacles in particular. They are:

A. Algorithms can be black-boxed: Algorithms are oftentimes proprietary constructs. They are owned and created by companies and governments, and their precise mechanisms are often hidden from the outside world. They are consequently said to exist in a ‘black box’. We get to see their effects on the real world (what comes out of the box), but not their inner workings (what’s inside the box). The justification for this black-boxing varies: sometimes it is purely about protecting the property rights of the creators; other times it is about ensuring the continued effectiveness of the system. Thus, for example, Google are always concerned that if they reveal exactly how their Pagerank algorithm works, people will start to ‘game the system’, which will undermine its effectiveness. Frank Pasquale wrote an entire book about this black-boxing phenomenon, if you want to learn more.

B. Algorithms are heterogeneous and contextually embedded: An individual could construct a simple algorithm, from scratch, to perform a single task. In such a case, the resultant algorithm might be readily decomposable and understandable. In reality, most of the interesting and socially significant algorithms are not produced by one individual or created ‘from scratch’. They are, rather, created by large teams, assembled out of pre-existing protocols and patchworks of code, and embedded in entire networks of algorithms. The result is an algorithmic system that is much harder to decompose and understand.

C. Algorithms are ontogenetic and performative: In addition to being contextually embedded, contemporary algorithms are also typically ontogenetic. This is a somewhat jargonistic term, deriving from biology. All it means is that algorithms are not static and unchanging. Once they are released into the world, they are often modified or adapted. Programmers study user-interactions and update code in response. They often experiment with multiple versions of an algorithm to see which one works best. And, what’s more, some algorithms are capable of learning and adapting themselves. This dynamic and developmental quality means that algorithms are difficult to study and research. The system you study at one moment in time may not be the same as the system in place at a later moment in time.

D. Algorithms are out of control: Once they start being used, algorithms often develop and change in uncontrollable ways. The most obvious way for this to happen is if algorithms have unexpected consequences or if they are used by people in unexpected ways. This creates a challenge for the researcher insofar as generalisations about the future uses or effects of an algorithm can be difficult to make if one cannot extrapolate meaningfully from past uses and effects.

These four obstacles often compound one another, creating more challenges for the researcher.

2. Six Methods of Algorithm Research
Granting that there are challenges, the social and technical importance of algorithms is, nevertheless, such that research is needed. How can the researcher go about understanding the complex and contextual nature of algorithm-construction and usage? It is highly unlikely that a single research method will do the trick. A combination of methods may be required.

Kitchin identifies six possible methods in his article, each of which has its advantages and disadvantages. I’ll briefly describe these in what follows:

1. Examining Pseudo-Code and Source Code: The first method is the most obvious. It is to study the code from which the algorithm was constructed. As noted in my earlier post there are two bits to this. First, there is the ‘pseudo-code’ which is a formalised set of human language rules into which the task is translated (pseudocode follows some of the conventions of programming languages but is intended for human reading). Second, there is the ‘source-code’, which is the computer language into which the human language ruleset is translated. Studying both can help the researcher understand how the algorithm works. Kitchin mentions three more specific variations on this research method:

1.1 Deconstruction: Where you simply read through the code and associated documentation to figure out how the algorithm works.

1.2 Genealogical Mapping: Where you ‘map out a genealogy of how an algorithm mutates and evolves over time as it is tweaked and rewritten across different versions of code’ (Kitchin 2014). This is important where the algorithm is dynamic and contextually embedded.

1.3 Comparative Analysis: Where you see how the same basic task can be translated into different programming languages and implemented across a range of operating systems. This can often reveal subtle and unanticipated variations.

There are problems with these methods: code is often messy and requires a great deal of work to interpret; the researcher will need some technical expertise; and focusing solely on the code means that some of the contextual aspects of algorithm construction and usage are missed.

2. Reflexively Producing Code: The second method involves sitting down and figuring out how you might convert a task into code yourself. Kitchin calls this ‘auto-ethnography’, which sounds apt. Such auto-ethnographies can be more or less useful. Ideally, the researcher should critically reflect on the process of converting a task into a ruleset and a computer language, and think about the various social, legal and technical frameworks that shape how they go about doing this. There are obvious limitations to all this. The process is inherently subjective and prone to individual biases and shortcomings. But it can nicely complement other research methods.

3. Reverse-engineering: The third method requires some explanation. As mentioned above, one of the obstacles facing the researcher is that many algorithms are ‘black-boxed’. This means that, in order to figure out how the algorithm works, you will need to reverse engineer what is going on inside the black box. You need to study the inputs and outputs of the algorithm, and perhaps experiment with different inputs. People often do this with Google’s Pagerank, usually in an effort to get their own webpages higher up the list of search results. This method is also, obviously, limited in that it provides incomplete and imperfect knowledge of how the algorithm works.
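The input-output probing that reverse engineering relies on can be sketched as follows. The "black box" function and its weights are wholly invented for this illustration; the point is the method of varying one input at a time:

```python
# Stand-in black box: imagine we cannot read this function's source,
# only call it and observe what it returns.
def black_box_rank(word_count, inbound_links):
    return 0.3 * word_count + 5.0 * inbound_links

# Probe it: vary one input at a time and record the outputs.
base = black_box_rank(100, 10)
more_words = black_box_rank(200, 10)
more_links = black_box_rank(100, 20)

# Infer each input's per-unit effect from the output differences.
words_effect = (more_words - base) / 100
links_effect = (more_links - base) / 10
print(words_effect, links_effect)
```

Here the probes reveal that extra inbound links move the output far more than extra words, the kind of partial, inferred knowledge the reverse engineer must settle for.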

4. Interviews and Ethnographies of Coding Teams: The fourth method helps to correct for the lack of contextualisation inherent in some of the preceding methods. It involves interviewing or carefully observing coding teams (in the style of a cultural anthropologist) as they go about constructing an algorithm. These methods help the researcher to identify the motivations behind the construction, and some of the social and cultural forces that shaped the engineering decisions. Gaining access to such coding teams may be a problem, though Kitchin notes one researcher, Takhteyev, who conducted a study while he was himself part of an open-source coding team.

5. Unpacking the full socio-technical assemblage: The fifth method is described, again, in somewhat jargonistic terms. The ‘socio-technical assemblage’ is the full set of legal, economic, institutional, technological, bureaucratic, political (etc) forces that shape the process of algorithm construction. Interviews and ethnographies of coding teams can help us to understand some of these forces, but much more is required if we hope to fully ‘unpack’ them (though, of course, we can probably never fully understand a phenomenon). Kitchin suggests that studies of corporate reports, legal frameworks, government policy documents, financing, biographies of key power players and the like are needed to facilitate this kind of research.

6. Studying the effects of algorithms in the real world: The sixth method is another obvious one. Instead of focusing entirely on how the algorithm is produced, and the forces affecting its production, you also need to study its effects in the real world. How does it impact upon the users? What are its unanticipated consequences? There are a variety of research methods that could facilitate this kind of study. User experiments, user interviews and user ethnographies would be one possibility. Good studies of this sort should focus on how algorithms change user behaviour, and also how users might resist or subvert the intended functioning of algorithms (e.g. how users try to ‘game’ Google’s Pagerank system).

Again, no one method is likely to be sufficient. Combinations will be needed. But in these cases one is always reminded of the old story about the blind men and the elephant. Each is touching a different part, but they are all studying the same underlying phenomenon.

Algorithm Research Challenges and Methods.

 ###

John Danaher is an academic with interests in the philosophy of technology, religion, ethics and law. John holds a PhD specialising in the philosophy of criminal law (specifically, criminal responsibility and game theory). He was formerly a lecturer in law at Keele University and is currently a lecturer at the National University of Ireland, Galway (starting July 2014).

He blogs at http://philosophicaldisquisitions.blogspot.com and can be found here: https://plus.google.com/112656369144630104923/posts

This article originally appeared on John’s site here. Republished under creative commons license.

 

The post How to Study Algorithms: Challenges and Methods appeared first on h+ Media.

Disruptive — Bioinspired Robots (Audio, 3 Parts) http://hplusmagazine.com/2015/07/28/disruptive-bioinspired-robots-audio-3-parts/ http://hplusmagazine.com/2015/07/28/disruptive-bioinspired-robots-audio-3-parts/#comments Tue, 28 Jul 2015 19:05:30 +0000 http://hplusmagazine.com/?p=28397 Our bodies — and all living systems — accomplish tasks far more complex and dynamic than anything yet designed by humans.

The post Disruptive — Bioinspired Robots (Audio, 3 Parts) appeared first on h+ Media.


Our bodies — and all living systems — accomplish tasks far more complex and dynamic than anything yet designed by humans. Many of the most advanced robots in use today are still far less sophisticated than ants that “self-organize” to build an ant hill, or termites that work together to build impressive, massive mounds in Africa. From insects in your backyard, to creatures in the sea, to what we see in the mirror, engineers and scientists at the Wyss Institute are drawing inspiration from nature to design whole new classes of smart swarm, soft, wearable and popup robotic devices. In this three part episode, Wyss Institute Core Faculty Members Radhika Nagpal, Robert Wood and Conor Walsh discuss the high-impact benefits of their bioinspired robotic work, as well as what drove them to this cutting-edge field.

In the first Bioinspired Robotics episode, Wyss Founding Core Faculty Member Radhika Nagpal discusses swarm collectives, as well as the challenges faced by women in the engineering and computer science fields.

In part 2 of the Bioinspired Robotics episode, Wyss Founding Core Faculty Member Robert Wood discusses new manufacturing techniques that are enabling popup and soft robots.

In part 3 of the Bioinspired Robotics episode, Wyss Core Faculty Member Conor Walsh discusses how a wearable robotic exosuit or soft robotic glove could assist people with mobility impairments, as well as how the goal to create real-world applications drives his research approach.

 

 

Funding Policies Distort Science http://hplusmagazine.com/2015/07/27/funding-policies-distort-science/ http://hplusmagazine.com/2015/07/27/funding-policies-distort-science/#comments Mon, 27 Jul 2015 18:17:09 +0000 http://hplusmagazine.com/?p=28393 Capital shuns risk . —— The essence of science is exploration of the unknown.

The post Funding Policies Distort Science appeared first on h+ Media.



Capital shuns risk. The essence of science is exploration of the unknown.

Science and capitalism are not exactly a match made in heaven.  Government and foundation funding has always been behind the curve of innovation, but the recent contraction in US science funding has engendered an unprecedented intensity of competition.  This has translated into a disastrous attitude of risk aversion.  A “hard-headed” business model prevails at the funding agencies, and they are now funding only those projects that they deem “most likely to succeed.”

The difference between science and engineering is that scientific research starts without understanding, trying out various hypotheses until one seems to work, while an engineer works from a paradigm that she knows to be reliable enough to predict the results of her innovations in advance.

A high failure rate is inseparable from good science.  But NSF prefers to fund low-risk work, which is really engineering.  

————

One irony is that capitalism is pretty good at allocating funds for engineering.  Once the science is well developed, the marketplace isn’t a bad model for deciding where to invest engineering resources.  We probably don’t need NSF to fund the “D” half of “R&D”.  But the reason that we need NSF (and NIH and NIA) as public funding institutions is that the rewards of science are difficult to predict.  I venture to propose Mitteldorf’s Law of Experimentation:

The more unpredictable the result, the more important the experiment.


Perverse Incentives and the Law of Unintended Consequences

Government funding pays a minor portion of each contract for salaries, equipment, and operating expenses.  The major portion is called “overhead”, and it goes back to the university (or other institution, some of them for-profit) that houses the research.  The proportion allocated to overhead “ranges from 20% to 85% at universities, and has an even wider spread at hospitals and non-profit research institutes.” [from a 2014 article in Nature; also, background and recent news here]
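To make the overhead arithmetic concrete (the figures below are hypothetical, not any university’s actual budget): the total award a lab must win scales with one plus the overhead rate, so each additional point of overhead diverts money from the research itself.

```python
# Toy illustration of how the "overhead multiplier" splits a grant.
# Hypothetical numbers only.
def award_needed(direct_costs, overhead_rate):
    """Total award = direct costs + overhead_rate * direct costs."""
    return direct_costs * (1.0 + overhead_rate)

# A lab needing $1M of actual research money at a 53% overhead rate:
print(award_needed(1_000_000, 0.53))  # roughly $1.53M total award
# The same research budget at 54% requires a slightly larger award,
# with the extra point going to the university, not the science.
print(award_needed(1_000_000, 0.54))
```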

At a prominent midwestern medical school where I was visiting last week, my host had just received word of renewed funding for his research and breathed a sigh of relief.  For the first time in many months, he was able to put grant-writing in the background and devote his attention to the substance of his research.

There were cranes and bulldozers and signs of new construction everywhere.  It looked like healthy expansion in the health sciences.  But the reality beneath the surface is that the existing buildings were less than 70% occupied.  Why was the university building more space when it couldn’t fill the space it had?  Because the “overhead multiplier” is negotiated with NIH for the university as a whole, and has enormous consequences, dwarfing the cost of any one building.  This university had an overhead rate of 53%, and the new construction was part of a push to justify raising it to 54%.  If it succeeds, the university will be a little richer, and each of its research scientists will be a little poorer.

 

Pharmaceutical Companies are the Worst Case

Private companies are motivated to research the drugs that can be sold most profitably, not the ones that can provide the most good to the public at the least cost.  So there are orphaned drugs, and there are untested nutraceuticals that are unpatentable and therefore unprofitable, but may be safer and more effective than the patented drugs – how will we know?  This represents a huge distortion of spending priorities, a sacrifice of health to profit.

There is a well-documented tendency of pharmaceutical companies to research small tweaks to their competitors’ successful drugs, rather than strike out in new areas with new ideas.  The former has a lower risk, and if the program is successful, the company can take over an entire profitable market from a competitor, with a drug that is only marginally better.  And even if the new drug is not even marginally better, frequently the company finds a way to market it anyway.

The system that we have provides that pharmaceutical companies are responsible for testing their patented products for safety and efficacy.  This is an invitation to corruption.  In Phase III trials, a company has already invested so much in its product that if the trial results are negative, the company is on the hook for hundreds of millions of dollars.  It is too much to ask scientists to be objective under these conditions.  How can they make unbiased judgments about the message of their data, let alone design experiments and tests and criteria, when their funding and their boss’s funding depend on a favorable result?  Does anyone believe that scientific data reported under such circumstances can be reliable?  Among the horror stories of fraud and suppressed data in the pharmaceutical industry, antidepressants top the list because criteria are subjective and markets are huge.  In addition to antidepressants, many of the drugs on this list are psychiatric drugs that have been promoted “off-label” for depression because this expands their potential market.

Pain medications are sold in a shadow street market.  Arthritis drugs have been promoted despite the dangers they pose for cardiovascular damage.

Abuse of antibiotics and the unfolding global crisis of antibiotic-resistant bacteria is too big a topic even to summarize.

The right way to fund pharmaceutical research is through university grants to target high-priority specific diseases, including aging.  All patents accruing from this work should be placed in the public domain, and pharmaceutical companies can compete at what they do best, which is devising inexpensive ways to manufacture and distribute known chemicals.

 

Positive directions

The best prospects for future scientific breakthroughs lie in the direction of things that we already know but don’t understand – things that don’t make sense.  Most of these will turn out to be mistakes in experimental technique or interpretation; but there are some that have such broad corroboration from diverse laboratories that this is unlikely.  I have a personal passion for collecting stories of scientific results that defy theory, and a portion of my research and reading time is always devoted to looking for neglected or fringe science that just might lead someplace new and interesting.

Within the field of aging research, readers of these pages already know that my dark horse favorites are telomerase, decoding the language of epigenetic programming, identifying the relevant blood factors from parabiosis experiments, and replicating promising Russian experiments with epithalon and other short peptides.  Here are a few topics that have piqued my interest from further afield in biology.

  • Cell phones and cancer.  I don’t know whether the risk from RF radiation is small or large, but I do know that it ought to be zero from everything we know about biology and physics.  Interactions between RF radiation and biological systems took the entire scientific community by surprise, and whatever the mechanism turns out to be, it is likely to open doors into new fields of research.
  • Animal navigation.  From salmon to monarchs, from whales to homing pigeons, the means by which animals know where they are and where they want to be are just beginning to be elucidated.  Some are amazingly reliable.  Surprising uses of quantum physics by plants and animals have already been a fruit of this research.
  • Perhaps related is (presumed) epigenetic inheritance of acquired cognitive information.  Knowledge (as far as we know) is coded in synapses in the brain. How can it be transmitted in DNA?  The case of monarch butterflies “remembering” the tree 2000 miles away where their great great great great grandmother overwintered is a well-known example.  Less known is this article on metamorphosis and learning from PLoS One.
  • Anomalous cures.  For every “incurable” disease, there is some small percentage of people who manage to cure themselves.  These cases are ignored by most medical scientists because they don’t fit the model of statistical evidence and “one disease ⇒ one cure” that predominates in the community.  But perhaps we can learn some basic biology from studying them.
  • Lamarckian inheritance.  Darwin believed that the individual traits of your offspring depend on your activities as well as your genes.  “Use and disuse” was his term.  But for 100 years since August Weismann, bedrock evolutionary science has told us that the genes you inherit are the genes you pass on, altered only by purely random mutations.  In recent decades, exceptions to this law have emerged.  One is epigenetic inheritance, through which your life experience can affect your children and grandchildren and perhaps great-grandchildren through their inherited gene expression.  The other is what James Shapiro calls natural genetic engineering.  He has documented the ability of bacteria to alter their genes in response to stress, in a way that responds explicitly to the kind of stress that is experienced.  Is anyone looking to see if higher organisms can do this, too?

 

I could go further…

I could say that “professional scientist” is already an oxymoron.  Scientists work best when they are driven by curiosity and a passion to find out, when they are doing what they love.  How can that be consistent with centralized decision-making and bureaucratic control of research priorities?  If we pay a scientist to do science, we should not make the payment contingent on studying anything in particular.

No one in a government bureaucracy has the wisdom to predict next year’s breakthroughs, or to single out the scientists most likely to achieve them.

Since 1996, I have pursued the science of aging without funding or support or a university appointment.  (Every year or two, I ask a colleague to arrange for an unpaid “courtesy appointment” so that I can have a university affiliation behind my name when I submit papers for peer review.)  Some of my closest friends are at universities, with large research staffs and successful careers.  I envy their daily contact with colleagues, access to seminars, and (above all) the opportunity to mentor and supervise the next generation of researchers.  They tell me I am lucky to avoid grant-writing, faculty meetings and academic politics.  Most of my academic friends and colleagues have paid for their success with their health in one way or another.  I am privileged to manage my time so as to make self-care a priority – nutrition, exercise, meditation, and sleep.

In the late 1970s, when I was a low-level researcher at a government contract research house on Route 128, we always worked one year ahead of our funding.  By the time a proposal was written, we had worked out the science in sufficient detail that we knew the results.  If the proposal was funded, we would use the proceeds to support us while we worked on next year’s proposal.

We may be outraged at 70% overhead rates for administration, and think of this as “slush money” that is ripe for abuse.  I agree that bureaucrats receive too big a share of the pie, and scientists too little.  But there is some portion of the overhead money that finds its way back through departments to the researchers themselves, and offers them some slack between contracts, their only real freedom to think and to innovate.

I asked my collaborator at Prominent Midwestern U whether he had funding for the exploratory, groundbreaking work on population dynamics that he was doing with me, but I already knew the answer.  He was doing it with soft funding for a follow-on to previously successful research.  He had prudently kept the funders in the dark about this specific project.  There’s plenty of time to tell them about it if we succeed.

###

This article originally appeared here in Josh’s blog Aging Matters. Republished with permission.

Be Nice http://hplusmagazine.com/2015/07/27/be-nice/ http://hplusmagazine.com/2015/07/27/be-nice/#comments Mon, 27 Jul 2015 18:06:41 +0000 http://hplusmagazine.com/?p=28389 What’s the harm in just being rude and uncivil?

The post Be Nice appeared first on h+ Media.

You should really be nicer to your colleagues – rude behavior is contagious

We experience rudeness and incivility all the time. From simple insults and offhand remarks to purposely excluding others from groups, these behaviors are largely tolerated in our daily lives and in the workplace. The question is, what effect do these behaviors have on us?

It’s pretty clear that high-intensity negative behaviors like abuse, aggression and violence are harmful. But what’s the harm in just being rude and uncivil?

A growing body of research offers compelling evidence that experiencing rudeness, and even simply witnessing rudeness, can have surprisingly harmful effects on performance, creativity and even helpfulness. However, it might not even end there.

What if rudeness was actually contagious? This would mean that rudeness may not only hurt those who experience or witness it, but also have secondary effects. People who’ve experienced rude behavior from others are now “infected” with rudeness themselves, and will be rude to the people they interact with next.

Office rudeness is contagious, just like the common cold

To explore this phenomenon, my colleagues and I at the University of Florida (Andrew Woolum and Amir Erez) conducted a study to find out if rudeness was contagious from one person to another.

Over a seven-week period, the participants (students in a negotiations course) took part in 11 negotiation exercises with various partners.

After each negotiation, participants had the opportunity to rate how rudely their negotiation partner had behaved. The structure of this exercise allowed us to explore how rudeness could be contagious by examining how the rudeness experienced in one negotiation influenced rude behaviors in the next negotiation. We didn’t instruct participants to be rude; we simply measured the normal rudeness that was present in the negotiation setting.

We found that rudeness is in fact contagious. If negotiators felt that their negotiation partner was rude, when they went on to their next negotiation, their new partner in turn perceived them as rude.

Another surprising finding was how long this effect lasted. Some of the negotiations took place one after another, and some took place up to seven days apart. We found that the time between negotiations didn’t seem to matter. Even if negotiations were a week apart, the rudeness experienced in the previous negotiation still caused participants to be rude in their next negotiation.
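One way to picture this contagion effect is a toy agent-based model (our sketch for illustration, not the study’s methodology, and all parameters are invented): agents meet in random pairs each round, and experiencing rudeness raises an agent’s probability of being rude in later encounters.

```python
# Toy agent-based sketch of rudeness contagion (illustrative only).
import random

random.seed(1)

def simulate(n_agents=100, rounds=11, base_p=0.05, boost=0.35):
    p_rude = [base_p] * n_agents   # each agent's chance of acting rudely
    prevalence = []
    for _ in range(rounds):
        order = list(range(n_agents))
        random.shuffle(order)      # random pairing each round
        rude_now = [random.random() < p_rude[i] for i in range(n_agents)]
        for a, b in zip(order[::2], order[1::2]):
            # experiencing a rude partner raises one's own rudeness odds
            if rude_now[b]:
                p_rude[a] = min(1.0, p_rude[a] + boost)
            if rude_now[a]:
                p_rude[b] = min(1.0, p_rude[b] + boost)
        prevalence.append(sum(rude_now) / n_agents)
    return prevalence

trend = simulate()
print(trend[0], trend[-1])  # rudeness spreads through the population
```

Even a modest “infection” boost compounds over rounds, which is consistent with the study’s picture of rudeness propagating from one interaction to the next.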

Is it catching? Office workers via www.shutterstock.com.

Why does rudeness spread from one person to another?

Prior research has shown that both emotions and behaviors can be socially contagious.

For example, when people around you are feeling happy, it is likely that you will start to feel happy too. Similarly, when people around you tap their toes or fold their arms, often you will start doing the same thing. But since these effects are usually described as simple subconscious mimicry, they probably can’t explain why rudeness makes us more rude. So how does it happen?

To tackle this question, we explored whether a process occurring in a subconscious part of the brain was responsible. When we experience social stimuli (like a conversation with a coworker), they can activate concepts deep in the subconscious part of our brains.

A concept could be anything. We have a concept for anger, happiness, sadness, power, and, of course, rudeness. The activation of concepts is automatic – meaning when it happens, we aren’t aware of it. And when concepts are activated, this changes the way we perceive the world a little bit.

Happy concept activated. Smiley face via www.shutterstock.com.

For example, just seeing a happy face could activate the happiness concept, causing us to perceive subsequent stimuli as happier. Furthermore, researchers have found that writing a short vignette about power can activate the power concept, causing people to feel more powerful.

So if that rude concept is activated, it causes us to perceive stimuli as a little bit more rude. And that’s what we found in two experimental studies. When people experienced (or even witnessed) rudeness, they noticed rudeness in their environment more, making them more likely to perceive things as rude, and this perception of rudeness caused them to respond with rudeness.

For example, imagine someone walking by you and saying “Hey, nice shoes!” You might interpret that as a compliment, or you might interpret it as an insult – it’s sort of hard to tell, and your brain has to decide. Well, when you’ve recently experienced rudeness, you are more likely to perceive that comment as rude even if it wasn’t meant that way. Then, subsequently, you will respond to the perceived rudeness with more rudeness.

What is so scary about this effect is that it’s an automatic process – it takes place in a part of your brain that you are not aware of, can’t stop, and can’t control. So, you would not necessarily be aware that the reason you (mis)interpreted the “nice shoes” comment is that you had recently experienced rudeness. This means you can’t temper the process.

Just don’t be rude

This evidence that rudeness is contagious really underscores how harmful these behaviors can be, particularly in organizational settings.

While prior evidence showed that rudeness could be harmful to performance, creativity and helpfulness, this research shows that the effects are not limited to the parties of the rude interaction.

In this way, rudeness can spread out like a virus, not only harming the performance of those who experience it but also making them carriers likely to pass the harm on to those with whom they interact next.

This means that maybe we need to rethink what behaviors are acceptable in the workplace. Behaviors like aggression, abuse and violence are not tolerated at work, but rudeness sometimes tacitly is – and maybe it shouldn’t be. Up to 98% of workers report that they have experienced rudeness in the office, and 50% say they experience it weekly. So just be nice.

###

Trevor Foulk is a doctoral student at the University of Florida.

This article was originally published on The Conversation.
Read the original article.

Cellular Lasers http://hplusmagazine.com/2015/07/27/cellular-lasers/ http://hplusmagazine.com/2015/07/27/cellular-lasers/#comments Mon, 27 Jul 2015 17:38:20 +0000 http://hplusmagazine.com/?p=28385 Researchers at Harvard Medical School create a miniature laser that can emit light inside a single live cell.

The post Cellular Lasers appeared first on h+ Media.


In the last few decades, lasers have become an important part of our lives, with applications ranging from laser pointers and CD players to medical and research uses. Lasers typically have a very well-defined direction of propagation and very narrow and well-defined emission color. We usually imagine a laser as an electrical device we can hold in our hands or as a big box in the middle of a research laboratory.

Fluorescent dyes have also become commonplace, routinely used in research and diagnostics to identify specific cell and tissue types. Illuminating a fluorescent dye makes it emit light with a distinctive color. The color and intensity are used as a measure, for example, of concentrations of various chemical substances such as DNA and proteins, or to tag cells. The intrinsic disadvantage of fluorescent dyes is that only a few tens of different colors can be distinguished.

In a combination of the two technologies, researchers know that if a dye is placed in an optical cavity – a device that confines light, such as two mirrors, for example – they can create a laser.

Taking it all a step further, our research, described in the journal Nature Photonics, shows we can create a miniature laser that can emit light inside a single live cell.

Tiny, tiny lasers

Green laser bead in a cell.
Matjaž Humar and Seok Hyun Yun, CC BY-ND

We made our lasers out of solid polystyrene beads ten times smaller than the diameter of a human hair. The beads contain a fluorescent dye and the surface of the bead confines light, creating an optical cavity. We fed these laser beads to live cells in culture, which eat the lasers within a few hours. After that, we can operate the lasers by illuminating them with external light without any harm to the cells.

Then we capture the light emitted from the cells via a spectrometer and analyze the spectrum. The lasers can act as very sensitive sensors, enabling us to better understand cellular processes. For example, we measured the change in the refractive index – the way light travels through the cell – while varying the concentration of salt in the medium surrounding the cells. The refractive index is directly related to the concentration of chemical constituents within the cells, such as DNA, proteins and lipids.
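As a rough sketch of why such a bead is a sensitive refractive-index probe (this is our back-of-the-envelope model, not a calculation from the paper): the bead’s whispering-gallery resonances approximately satisfy m·λ = 2πR·n_eff, so a small change in the effective index shifts the lasing wavelength by a measurable amount. The mode number, radius and index values below are assumed purely for illustration.

```python
# Back-of-the-envelope whispering-gallery-mode sketch (assumed numbers).
# Resonance condition: m * wavelength ≈ 2 * pi * R * n_eff.
import math

def resonance_wavelength(mode_m, radius_um, n_eff):
    """Approximate m-th whispering-gallery resonance, in micrometres."""
    return 2.0 * math.pi * radius_um * n_eff / mode_m

R = 4.0   # bead radius in micrometres (assumed)
m = 60    # azimuthal mode number (assumed)

lam1 = resonance_wavelength(m, R, 1.370)  # baseline effective index
lam2 = resonance_wavelength(m, R, 1.375)  # slightly higher index
shift_nm = (lam2 - lam1) * 1000.0
print(f"resonance shift: {shift_nm:.2f} nm")
```

In reality the effective index is dominated by the polystyrene bead itself, with the cell’s contents perturbing it only slightly; the point of the sketch is that even an index change in the third decimal place produces a nanometre-scale, easily resolvable wavelength shift.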

Further, lasers can be used for cell tagging. Each laser within a cell emits light with a slightly different fingerprint that can be easily detected and used as a bar code to tag the cell. Since a laser has a very narrow spectral emission, a huge number of unique bar codes can be produced, something that was impossible before.

With careful laser design, up to a trillion cells (1,000,000,000,000) could be uniquely tagged. That’s comparable to the total number of cells in the human body. So in principle, it could be possible to individually tag and track every single cell in the human body. This is a huge leap from cell-tagging methods demonstrated until now, which can tag at most a few hundred cells. So far we’ve tagged cells only in Petri dishes, but there’s no reason it shouldn’t also work for cells within a living body.
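The trillion-tag figure can be sanity-checked with simple combinatorics (the numbers below are our illustrative assumptions, not the paper’s actual design): with N distinguishable spectral positions, k lasing lines per bead, and a handful of distinguishable bead sizes, the number of unique barcodes is C(N, k) times the number of sizes.

```python
# Toy barcode-counting sketch (all parameters assumed for illustration).
from math import comb

channels = 400   # distinguishable wavelength slots (assumed)
lines = 5        # lasing peaks forming one bead's barcode (assumed)
sizes = 12       # distinguishable bead diameters (assumed)

barcodes = comb(channels, lines) * sizes
print(f"{barcodes:.2e} unique tags")  # on the order of a trillion
```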

Green cells with their blue nuclei were injected with red oil droplets that act as deformable lasers.
Matjaž Humar and Seok Hyun Yun, CC BY-ND

Alternative materials for cellular lasers

Instead of a solid bead, we also used a droplet of oil as a laser inside cells. Using a micropipette, we injected a tiny drop of oil containing fluorescent dyes into a cell. In contrast to the solid bead, forces acting inside the cells can deform the droplets. By analyzing the light emitted by a droplet laser, we can measure that deformation and calculate the force acting on the droplet. It’s a way to get a very precise picture of the kinds of mechanical forces exerted within cells by processes such as cellular migration and division.

Yellow lipid cells within subcutaneous fat tissue, which can be used as natural lasers.
Matjaž Humar and Seok Hyun Yun, CC BY-ND

Finally, we realized that fat cells already contain lipid droplets that can work as natural lasers. They don’t need to eat or be injected with lasers, just supplied with a nontoxic fluorescent dye. That means each of us already has millions of lasers inside our fat tissue that are just waiting to be activated to produce laser light. Next time you’re thinking about trimming down, you could just reconceptualize your body fat as a huge number of tiny lasers.

Inserting an optical fibre into a piece of pig’s skin to excite and extract the laser light generated by subcutaneous fat cells.
Matjaž Humar and Seok Hyun Yun, CC BY-ND

Our new cell laser technology will help us understand cellular processes and improve medical diagnosis and therapies. Cell lasers could eventually provide remote sensing inside the human body without the need for sample collection. A cell is a smart machine, equipped with a computer with “DNA Inside.” Specialized cells, such as immune cells, can find disease and sites of inflammation, carrying the laser to the target for laser-based diagnosis and therapy. Imagine cell lasers helping determine what a suspicious lump is made of, rather than doctors needing a biopsy. Cell lasers also hold promise as a way of delivering laser light for therapies, for example to activate a photosensitive drug at the target site to kill microbes or cancerous cells.

###

Matjaž Humar is Research Fellow in Dermatology at Harvard Medical School.
Seok-Hyun Yun is Associate Professor of Dermatology at Harvard Medical School.

This article was originally published on The Conversation.
Read the original article.

Video Friday: Beyond Parabiosis: Defined Approaches to Tissue Rejuvenation presented by Dr. Irina Conboy http://hplusmagazine.com/2015/07/24/video-friday-beyond-parabiosis-defined-approaches-to-tissue-rejuvenation-presented-by-dr-irina-conboy/ http://hplusmagazine.com/2015/07/24/video-friday-beyond-parabiosis-defined-approaches-to-tissue-rejuvenation-presented-by-dr-irina-conboy/#comments Fri, 24 Jul 2015 17:44:50 +0000 http://hplusmagazine.com/?p=28381 Irina Conboy, PhD, Associate Professor of Bioengineering, University of California, Berkeley, CA,

The post Video Friday: Beyond Parabiosis: Defined Approaches to Tissue Rejuvenation presented by Dr. Irina Conboy appeared first on h+ Media.


Irina Conboy, PhD, Associate Professor of Bioengineering, University of California, Berkeley, CA.

Barshop Institute Seminar Series for January 7, 2014

Thanks to Steve Hill

Antibiotic Resistance Can Make Bacteria Stronger http://hplusmagazine.com/2015/07/24/antibiotic-resistance-can-make-bacteria-stronger/ http://hplusmagazine.com/2015/07/24/antibiotic-resistance-can-make-bacteria-stronger/#comments Fri, 24 Jul 2015 16:23:32 +0000 http://hplusmagazine.com/?p=28377 Unfortunately, disease-causing bacteria can become resistant to antibiotics that are meant to kill them.

The post Antibiotic Resistance Can Make Bacteria Stronger appeared first on h+ Media.

Antibiotics are wonderful drugs for treating bacterial infections. Unfortunately, disease-causing bacteria can become resistant to antibiotics that are meant to kill them. This is called selective pressure – the bacteria that are susceptible to the drug are killed, but the ones that withstand the antibiotic survive and proliferate. This process results in the emergence of antibiotic-resistant strains.
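The selection dynamic described above can be sketched with a toy two-strain growth model (the parameters are purely illustrative, not measured values): under antibiotic treatment the susceptible subpopulation is killed far faster than the resistant one, so even an initially rare resistant strain comes to dominate.

```python
# Toy model of selective pressure (illustrative numbers only):
# susceptible and resistant subpopulations grow each day, but the
# antibiotic kills susceptible cells far more effectively.
def treat(susceptible, resistant, days, growth=1.5,
          kill_susceptible=0.9, kill_resistant=0.05):
    for _ in range(days):
        susceptible = susceptible * growth * (1.0 - kill_susceptible)
        resistant = resistant * growth * (1.0 - kill_resistant)
    return susceptible, resistant

# Start with a large susceptible majority and a rare resistant mutant.
s, r = treat(susceptible=1e9, resistant=1e3, days=10)
frac_resistant = r / (s + r)
print(f"resistant fraction after treatment: {frac_resistant:.4f}")
```

After ten simulated days of treatment the susceptible population has collapsed while the resistant one has multiplied, which is the emergence of a resistant strain in miniature.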

Once a bacterial strain is resistant to several different antibiotics, it has become a multi-drug-resistant (MDR) microbe. When there are virtually no antibiotics available to treat an infected patient, a microbe is said to be “pan-resistant.” These strains are becoming more and more common in hospitals and in the community at large. You might have heard of some of them: for instance, methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant Enterococci (VRE) and carbapenem-resistant Enterobacteriaceae (CRE).

Bacteria can become drug-resistant in two ways – resistance can be natural, meaning that the genes conferring resistance are already present in the bacterial chromosome, or it can be acquired through mutation or by picking up antibiotic-resistance genes from other microbes.

It is now possible to use new DNA-sequencing technologies to take a closer look at how antibiotic resistance can make some bacteria weaker or stronger. And in a new study, we found that – contrary to conventional wisdom around antibiotics – resistance can actually make some bacteria fitter and even more virulent.

A methicillin-resistant Staphylococcus aureus (MRSA) strain in a Petri dish containing agar jelly for bacterial culture in a microbiological laboratory in Berlin.
Fabrizio Bensch/Reuters

Is fitness always a cost of antibiotic resistance?

For decades, an established dogma in the field of infectious diseases has been the so-called “fitness cost of antibiotic resistance.” We believed there was a trade-off for bacteria between antibiotic resistance and how well they could carry out their regular tasks of living.

The idea is that while antibiotic-resistant strains cause infections that are more difficult to treat, they are also less hardy. Either they are less able to survive within an infected host and/or they’re less virulent, causing less severe infection, with a reduced ability to be passed along to another human.

And we know that this picture is true for some bacteria. Both Mycobacterium tuberculosis (which causes tuberculosis) and Mycobacterium leprae (which causes leprosy) can become resistant to the drug rifampicin, which is one of the main antibiotics used to treat these diseases.

For M. tuberculosis and M. leprae, resistance to rifampicin comes thanks to a mutation in one gene. The mutation buys the bacteria the ability to fend off antibiotics, but it interferes with their normal cell physiology and the factors that make them virulent. As we’d expect, resistance comes with a clear fitness cost in this case.

But what if resistance actually makes some bacteria stronger and deadlier? Our team used DNA sequencing techniques to tease apart the relationship between antibiotic resistance and fitness cost in infections in laboratory animals. It turns out that for some bacteria, drug resistance actually makes them fitter.

Using ‘jumping genes’ to compare resistance and fitness

We analyzed a bacterium called Pseudomonas aeruginosa. It’s a major cause of infections in people with cystic fibrosis, as well as very ill patients in intensive care units (ICU) and people with weakened immune systems.

P. aeruginosa is naturally resistant to several antibiotics and can acquire resistance to numerous others to become multi-drug-resistant or even pan-resistant.

To find out if there was a fitness cost from resistance, we created mutant strains of P. aeruginosa using “jumping genes” to insert mutations into the bacteria. Because we wanted to see what the cost of resistance was, we made two kinds of mutant strains. Some mutant strains lost their natural-resistance genes, while other mutant strains acquired resistance due to inactivation of genes that made them susceptible to antibiotics.

This meant that we could use DNA sequencing to determine how loss of each mutated gene affected the overall ability of P. aeruginosa to cause an infection in mice and the bacterium’s overall fitness.

Antibiotic resistance doesn’t always come at a cost

With an organism like P. aeruginosa, physicians often turn to a class of antibiotics called carbapenems to treat infections. Carbapenems kill P. aeruginosa by entering through a channel, or pore, in the bacteria’s outer wall made by the protein OprD. Once the pore lets carbapenems in, they kill the cell. In more than 70% of human infections with carbapenem-resistant strains of P. aeruginosa, the bacterium has stopped making the OprD pore – meaning the killer antibiotic can no longer get inside the cell. We created mutant strains of P. aeruginosa that could not produce the OprD protein, giving them an acquired resistance to carbapenems.

In our experiments, it turns out that fitness is not a trade-off for resistance in P. aeruginosa. We found that the most fit mutants were those that had become carbapenem-resistant because the OprD protein was no longer made.

40% of strains recovered from the mice’s GI tracts were OprD mutants.
Mouse via www.shutterstock.com

In mice with P. aeruginosa infections in their gastrointestinal tracts, the OprD mutants initially represented less than 0.1% of the strains used to establish infections. But after five days, the OprD mutants comprised more than 40% of the strains we recovered from the mice’s GI tracts. The “mutant” bacteria didn’t just spread because they were hard to kill (we did not give any antibiotics to the mice) but because they were fitter than the other bacterial strains infecting the mice.
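The takeover described above is what compounding selection looks like. As a back-of-the-envelope sketch, a toy discrete-generation model shows how a fitter strain starting below 0.1% of a population can pass 40% in a few dozen rounds of growth (the per-generation advantage and generation count here are hypothetical numbers chosen for illustration, not values measured in the study):

```python
def mutant_fraction(f0, advantage, generations):
    """Fraction of the population that is mutant after repeated
    rounds of growth, given a relative fitness advantage."""
    mutant, wild = f0, 1.0 - f0
    for _ in range(generations):
        mutant *= 1.0 + advantage  # the mutant grows faster each round
        total = mutant + wild
        mutant, wild = mutant / total, wild / total  # renormalize to fractions
    return mutant

# A mutant starting at 0.1% with a modest 25% per-generation advantage
# exceeds 40% of the population within about 30 generations.
print(mutant_fraction(0.001, 0.25, 30))
```

The point of the sketch is only that selection compounds exponentially: no antibiotic needs to be present for a fitter strain to dominate, consistent with the observation that the mice received no antibiotics.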

We saw something similar when we used the mutant strains to give the mice bacterial pneumonia. The OprD mutants once again emerged as the predominant strains, but many of them were also resistant to another common antibiotic called fosfomycin. Like carbapenem resistance, fosfomycin resistance is also due to a single gene.

Overall, when bacteria acquired resistance to fosfomycin and carbapenem antibiotics, they became fitter and more virulent. This counters the more commonly accepted concept that there is a fitness cost due to antibiotic resistance.

In fact, we found that the mutant strains that lost their natural antibiotic resistance became less fit. So acquiring resistance made the bacterial cells stronger, while losing resistance made them weaker.

What about other kinds of bacteria?

To find out whether this effect was limited to P. aeruginosa, we looked at two other bacterial species to see if antibiotic resistance made them fitter as well.

We looked at another multi-drug-resistant and even pan-resistant organism called Acinetobacter baumannii, which causes many types of severe infections in the lungs, blood and skin, and a largely non-drug-resistant bacterium, Vibrio cholerae, which causes cholera. V. cholerae also has some natural antibiotic-resistance genes.

Along with coauthors Drs John Mekalanos and Stephen Lory at Harvard Medical School, we found that for A. baumannii and V. cholerae, the loss of antibiotic resistance was associated with loss of fitness and a weakened ability to cause infection.

But, when the bacteria acquired antibiotic resistance through a genetic mutation, they became more virulent, and had a stronger ability to cause infections in preclinical laboratory models of infections.

Ensuring antibiotics are used properly isn’t enough. Handwashing to control the spread of bacteria is important.
Handwashing via www.shutterstock.com

What does this mean for strategies to combat antibiotic resistant bacteria?

We do not expect these findings to be true for every kind of bacteria. But even if they apply to just some organisms, it means that resistant strains will not go away if we simply reduce or control antibiotic use.

There is a general belief that if antibiotics are used only when needed, the antibiotic-susceptible strains will outcompete the less fit – but resistant – strains. But this strategy might not be enough to combat bacteria that get stronger when they become drug-resistant instead of weaker.

Handwashing and related measures can control the spread of resistant bacteria. But we also need vaccines and premade antibodies that can be given to people who are at risk for, or actually infected with, drug-resistant microbes.

That is something our research team from Harvard Medical School and Brigham and Women’s Hospital is pursuing. We are investigating the development of a potentially very broad-spectrum vaccine along with another product, a human antibody, that could provide immunity to most drug-resistant bacteria, including tuberculosis and the feared MRSA strains, and perhaps even organisms causing diseases such as malaria.

The Conversation

Gerald Pier is Professor of Medicine (Microbiology and Immunobiology) at Harvard Medical School.
David Skurnik is Assistant Professor of Medicine, Division of Infectious Diseases, Brigham and Women’s Hospital at Harvard Medical School.

This article was originally published on The Conversation.
Read the original article.


]]>
http://hplusmagazine.com/2015/07/24/antibiotic-resistance-can-make-bacteria-stronger/feed/ 0
Science Fiction Realism http://hplusmagazine.com/2015/07/24/science-fiction-realism/ http://hplusmagazine.com/2015/07/24/science-fiction-realism/#comments Fri, 24 Jul 2015 16:16:15 +0000 http://hplusmagazine.com/?p=28372 Structures. Something has been built, grown, stretched. Maybe skin, maybe a web, maybe a protective barrier – it is a plastic protein emitted by an organism in order to increase its survival opportunities, it is a food matrix for its offspring which thrive on glossy resin. You can travel across it and it can easily be mapped, although not by humans.

The post Science Fiction Realism appeared first on h+ Media.

]]>

Structures. Something has been built, grown, stretched. Maybe skin, maybe a web, maybe a protective barrier – it is a plastic protein emitted by an organism in order to increase its survival opportunities, it is a food matrix for its offspring which thrive on glossy resin. You can travel across it and it can easily be mapped, although not by humans.

We can’t say anything about it – we can speculate everything about it. It is something possible or, as the author says, another reality. The real is replaced by the potential. This is one of a series of works by St. Petersburg-based artist Elena Romenkova. The works are glitches, abstract distortions, alien expressions of what for her is a subconscious realm.

A portal. You are entering the rainbow world contained within two concentric eggs within the grey world. This is light, reflections, haze, indescription. It looks inviting. The colour spectrum is odd, the whites creep up on everything else, the shape of everything is strange. Basic synaesthetic rules are inapplicable at the rainbow/grey world junction.

There is nothing that this image, by French artist Francoise Apter (Ellectra Radikal), has in common with Romenkova’s. They are united only by their adherence to strangeness, a technically created vista that looks like nothing we know. A world not of local cultures, but of computational production.  Here anyone can know anything, it doesn’t matter where you’re from.

What is culture when locality is secondary to epistemology? What is knowledge when the portable device takes precedence over your situated environment? Worlds are built around us, sophisticated electrical spaces, they travel where we travel, and only after do we factor in the idiosyncrasies of specific geography. If the banal experience is one of nomadic alienation, of search methods based on no place, what does the role of culture and art become? Everyday life is a subject for hypothetical language. The digital commons is a species of posthuman that communicates via speculative misunderstanding.

Korean artist Minhyun Cho (mentalcrusher) shows us what the dinosaurs really looked like. When you put the meat and scales back on. He shows us what an ice building being looks like in the shadow of terminal cartoon winter. How rubber can be used to erect sculptures and bones can be taken out of museums and put to good use in civic architecture. No one is around to see this, but still the idea sets a precedent. Crown each ghost with ice mountain prisms.

With visual language, very quickly we get to a stranger and more indeterminate range of science fiction possibilities than narrative tends to map out for us. How much imagination is possible, and how much does our internal experience match anything presented around us. If our environments advance exponentially quicker than any generational or traditional mythology, what sort of language can we have for expression? The maker’s invention precedes the reception of form.  Innovation is a matter of banal activity, communicating an experience of the real which is never the same.

And now an eyeball. Triangles. A vessel. To Cho’s blinding world of light, Spanish artist Leticia Sampedro responds with a featureless darkness. All absurdities once on display, now they recede into nothing. It might be a mandala, perhaps an artifact from the ancient future, a portable panopticon that fits conveniently on your desktop. Your feelings are here, your peculiar distances, everything’s reflecting off the glass, the metal, the camera. You are the mirrored fragments of an invention we’ve lost the blueprints to.  Foresight the womb of a disembodied politics of community.


In Tokyo, Architect and Artist Ryota Matsumoto creates “Static Keyframes”. This might be what data actually looks like, as the software renders a placeholder for a video instance, as binary electrical distinctions map a mathematical moment, somewhere there are bodies that conform to these new states. A pastel palette turned metallic, a complexifying of Romenkova’s stranded structure into knots of spectral confusion. Somewhere in here we have a blueprint for moving image memory. For a kinetographic polis of quasihuman protest. Every distorted structure decries the loss of recognition.

In German artist Silke Kuhar‘s (ZIL) work, we enter into one of these convolutions. Inside we find hallways, a nice selection of windows and all kinds of data – scripted, graphed, symbolized. This is the plan for the future. I hope you can read what it says. Her work meshes spaces with collapsing foreign constructs – if we can just read the language we’ll know what to do. But no one reads it, and no one wrote it. This is a building without inhabitants – architecture without people. Democratic ballots are automatically filled out by a predetermined algorithm.  Your agency is a speculative proposition for popular media – people collaborate with you, but they can’t be sure where you are, when you wrote, and if you really exist as such.

No people. This is a unifying principle. Cold, silver, streams. Machines in the sky. Silicon waterfalls, diagonal. Civilization distilled into physical patterns, an obtuse object photographed in another dimension. What is the word for reality again. What is the word for scientific investigation? The abstract glitch world of Maggy Almao, a Venezuelan based in Paris, is silent – it’s a gradient, it’s some illusion of partial perspective.

What is the language to talk about the world? If we turn to artists’ visualizations, what does that tell us about languages we speak, and ones we read? What does the graphing of incomprehensible mechanisms tell us in turn about art and its history? The machine’s narratives tend to drown out any functional reality. Genre storytelling tropes become repurposed as collective cultural ideas.  Conceptual works are followed by pragmatic speculation, medium-centric analysis replaced by experimental failures. You can never get a fictional experiment to work.

Science has indelibly entered the art field, for each of its medial innovations it requires further attention in terms of its technical makeup. Half the work is figuring out what the canvas even is, we are building canvases, none of them look alike, and their stories read like data manuals. An aesthetics of unknown information.

This is the homeland. The homeland is mobile and has many purple bubbles. It’s an airship from the blob version of the Final Fantasy series. It has satellite TV to keep in touch with the world. It has some tall buildings so you know it’s civilized. It is part of the opus of Argentine-born, Brooklyn-based artist Giselle Zatonyl, which deals comprehensively with science fiction ideas and their implications.

The ship travels, where the culture originates is more and more unknown. It is technically divided, access is the key, we can worry about language and culture later. We are still embodied, still located somewhere, but all this has become subject to the trampling of scientific mythologies, where their utilities might go, and where their toys are most needed. Crisis is a genre now, about as popular as time travel. You are now free to dream up whatever future society you wish, and subjugate whatever cyborg proletariat your heart desires. In the realm of speculation, anything is possible, and nothing is fully acceptable.

The themes of internet art production give us some language, some set of visions that tell certain stories – works found throughout the internet, posted in communities, shared online – sometimes part of gallery exhibitions or products, sometimes not. You get a profile, some social media pages, build a website, you begin making, sharing and remixing images. Folk art is a subsidiary of new media art – social sculpture meets internet content management systems. A language for political engagement based on the creative activity of speculation. Scientific dreams for a technological commons.

Dreams where sight is physicalized into complex data graphs. Where Sampedro’s portable gelatin panopticon is cloned into a regularized matrix. Inspired vision is just one aspect of algorithmic predictability. In Taiwanese artist Lidia Pluchinotta‘s visual work, the cloned image is central.  Mechanical reproduction, skulls, spirals, symbols, the internet has it all.  Civic participation has never been so mathematical, observation never so multiple.


Inside the city, architecture is actually a colour-coded map that helps you find the store you’re looking for. The map is the territory except there’s no info on how to read it. We are here, we are home, but the walls of the buildings were designed by some specialist that we haven’t met yet.  Stairs, depths, the complex and layered constructions in Canadian artist Carrie Gates‘ work aren’t quite one of Zatonyl’s buildings. More fragmented, more saturated, more chaotic. It’s speculated that people could live here, although we don’t see them anywhere. Not yet anyway.

The maelstrom of technological progress presents us with the need to adapt our participation and rhetoric accordingly. Science fiction is a folk language for common experience within a technoscientifically oriented world. These images are imaginative products of social and participatory artist communities who, when marrying the personal and contextual, create speculative objects of general strangeness. Their description is nothing less than one of alien entities – alien entities that are everywhere. Earth is the most sophisticated foreign planet we’ve yet to invent, we just need to discover how to populate it.

###

Erik Zepka is a conceptual artist, researcher and writer working in the intersections of art, literature, science and philosophy. With a new media focus, the growing technologies of the internet form a hub for his interdisciplinary exploration.

http://x-o-x-o-x.com

This article originally appeared here, republished under creative commons license.


]]>
http://hplusmagazine.com/2015/07/24/science-fiction-realism/feed/ 0