Exponential Medicine and Digital Healthcare — Nitish Kannan interviews astronaut Dr. Dan Barry, anti-aging scientist Aubrey de Grey and investor Vinod Khosla

Nitish Kannan interviews astronaut Dr. Dan Barry, anti-aging scientist Aubrey de Grey and investor Vinod Khosla at the recent Exponential Medicine conference.

Most people know that healthcare has rapidly become an information technology, and that health and medicine are changing exponentially. Just as technology is democratizing all aspects of life, no field is being disrupted more quickly than healthcare: digital technologies will bring a trillion dollars’ worth of disruption to outdated diagnosis, treatment, and delivery of care. Moreover, the human race may soon live long enough to live forever, as nanotechnology, robotics, gene sequencing, and AI cure age-old problems.


No conference covers the convergence of exponential technologies better than Exponential Medicine, which I recently attended in Coronado, California. Below I offer a glimpse into why Exponential Medicine is hands down Singularity University’s best conference, and the only one I’ve attended twice and keep coming back to year after year.

One of this year’s highlights was Dr. Dan Barry, a former astronaut and robotics expert, discussing his goal of making human civilization immortal.


Another major highlight this year was the announcement of the winner of the Nokia Sensing XCHALLENGE, which a company called DMI won. An expert panel of judges selected DMI from 11 finalists, giving it the highest combined score in accuracy, consistency, demonstration quality, technical innovation, human factors, market opportunity, originality, and user experience. DMI designed portable diagnostic technology that can carry out hundreds of clinical lab tests on a single drop of blood—and provide accurate results within minutes.

Nothing was more fulfilling for me than becoming good friends with Nigel Ackland, dubbed the “Bionic Arm Man” by the British press. Ackland lost his right forearm in an industrial blender accident in 2006, and the subsequent elective trans-radial amputation left him without the use of his right arm. In 2012 he became one of the first users of the advanced Bebionic hand, which now allows him a wide array of restored hand functions. In other words, he’s a real-life Deus Ex character, and he told me there is no reason why every amputee and war veteran can’t have a bionic arm soon.
Exponential technologies like 3D printing, sensors, and smaller chips are enabling custom-designed robotic arms for under $1,000 that are far superior to a hook and a stick. In some sense he is the first true bionic man. He uses a computer, ties his shoes, and shakes hands with everyone he meets; he even challenged the Navy SEALs in Coronado to try to pull the arm off, and they didn’t succeed. Hearing his heartwarming story and becoming his friend was one of my highlights of the conference and gave me great optimism about where technology is going.

Speaking of the Navy SEALs, a few of them were in attendance and even gave a seminar. The SEALs were humbled by the technologies available to them today and told many of the executives and scientists that there was no better place to impact a billion people than through the military, which can scale and test technologies. For instance, the military would love to have prosthetics, drones, exoskeleton suits, and better medical testing and treatment on the battlefield.


The SEALs were begging developers to build better remote telemedicine options and AI for remote battlefield medicine. Getting a closer look at these brave men and women, and at how interested they are in integrating exponential technologies, medical devices, and sensors into battlefield medicine, was a humbling experience.
Another highlight of the conference was speaking with Dr. Craig Venter about human longevity and his goal of radically extending the human lifespan, which naturally led me to interview my good friend Dr. Aubrey de Grey about SENS and what he thinks of Calico and Human Longevity, Inc. Video below.


Finally, what always makes Exponential Medicine the best conference is the thought leaders, investors, and executives who actually get game-changing technologies funded and executed in the real world. My personal highlight was spending a couple of days picking my friend Vinod Khosla’s brain on the future of exponential medicine, on how to radically disrupt medicine using smartphone physicals, telemedicine, robotics, and AI, and on the impact the Singularity would have on human civilization. How often do you get to spend a couple of days interviewing a billionaire and picking his brain about what your organization should be doing? My interview is below.


All in all, how would I describe Exponential Medicine? I would call it a mind meld of 197-IQ, Walter O’Brien-type people from the TV show Scorpion hanging out for a few days: talks on a range of mind-blowing topics, beach parties, dinners, lunches, and exercise sessions; for the brave, even a morning run with the Navy SEALs. It’s a retreat for the mind and body and, most importantly, a place to get a renewed vision of what’s changing in technology and what your company needs to implement right away to keep up with the ever-changing landscape of exponential technologies in medicine, health, AI, robotics, self-driving cars, and even fringe technologies that will replace humans and doctors. The future is near. Are you ready?


To learn more or to attend, see the link below. Thank you, Dr. Daniel Kraft, Rob Nail, Peter Diamandis, and all the faculty, as usual, for making this year as good as the last. To infinity and beyond!

http://exponential.singularityu.org/medicine/the-experience/

Different Molecular Profiles of Eudaimonic and Hedonic Happiness

Molecular biologic techniques have been employed to analyze happiness, and the results distinguish between two forms of happiness at the molecular level.

Helping people makes you happy, and it might also make you live longer.

Psychological health shows a strong correlation with good physical health and is a powerful predictor of future physical well-being. Conversely, long-term depression and other negative mental states correlate with an increased incidence of disease, including Parkinson’s disease, heart disease, cancer, and stroke. The biological bases by which negative psychological states adversely affect human health are partially understood and include factors such as increased inflammation, immune system suppression, and depression-related poor lifestyle choices, such as overeating, alcohol abuse, and smoking.

The biological correlates of psychological well-being and good health are less well understood. Recently, molecular biologic techniques have been employed to analyze psychological well-being, and interestingly, these techniques distinguish between different forms of happiness at the molecular-biologic level.

Since the time of Ancient Greece, philosophers have distinguished two forms of human well-being: the hedonic and the eudaimonic. Early formulations of hedonic happiness are attributed to Aristippus of Cyrene and Epicurus, while eudaimonic happiness is articulated in Aristotle’s Nicomachean Ethics. In hedonism, happiness is achieved by maximizing pleasure while minimizing pain. “Ethical hedonists” seek to maximize pleasure while avoiding infringements on the rights or happiness of others.

Eudaimonia, sometimes defined as “human flourishing”, consists of happiness achieved by purposeful, rational, ethical action that strives towards meaning and a noble purpose, and is more than the achievement of “pleasure, wealth, or honor” [1]. In more recent definitions, hedonism is thought to fulfill basic physiological and psychological drives, while eudaimonism fulfills more complex social and cultural desires [2,3].

Recently, “genome-wide transcriptional profiling” was used to analyze the expression of ~21,000 genes in 80 healthy individuals, who were selected by psychiatric testing (the “Short Flourishing Scale”) to separate individuals with hedonic vs. eudaimonic happiness [3]. In this technique, all the genes transcribed into RNA species (each RNA species corresponding to one of the ~21,000 genes) were identified, and their relative levels of expression were quantified. Since many of these RNA species are translated into proteins that exert different functions in human cells, this molecular technique gives information about the specific gene and protein expression patterns seen in different individuals. The cells used in the analysis were circulating white blood cells – a cell type often analyzed in molecular psychiatric studies.
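To make the technique concrete, here is a minimal sketch of the kind of group-wise differential expression comparison such profiling enables. This is illustrative only, not the study’s actual pipeline: the expression matrix is randomly generated, and the group sizes and significance threshold are assumptions.

```python
# Minimal sketch of a group-wise differential expression comparison.
# Illustrative only: the expression matrix, group sizes, and threshold
# are assumptions, not the study's actual data or pipeline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n_hedonic, n_eudaimonic = 21000, 40, 40

# Rows = genes, columns = individuals (log-scale expression values).
expr = rng.normal(loc=8.0, scale=1.0, size=(n_genes, n_hedonic + n_eudaimonic))
hedonic, eudaimonic = expr[:, :n_hedonic], expr[:, n_hedonic:]

# Welch's t-test per gene: which genes differ between the two groups?
t, p = stats.ttest_ind(hedonic, eudaimonic, axis=1, equal_var=False)
fold_change = hedonic.mean(axis=1) - eudaimonic.mean(axis=1)  # log-scale diff

# Flag genes with a nominally significant difference (no multiple-testing
# correction here; a real analysis would control the false discovery rate).
flagged = np.where(p < 0.001)[0]
print(f"{len(flagged)} genes nominally differ between the groups")
```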

Psychological studies of both hedonic and eudaimonic individuals revealed that both groups had good psychological health and low levels of depression, with the hedonic group showing a higher level of psychological health. Both groups reported good overall psychological well-being, and factors such as age, gender, and race/ethnicity had no effect on reported or measured well-being.

Interestingly, molecular analysis of hedonic and eudaimonic individuals revealed quite different gene expression patterns. Individuals with high hedonic happiness showed increased pro-inflammatory gene expression and decreased expression of genes associated with antibody synthesis and cellular interferon responses. The gene expression pattern seen in eudaimonic individuals was the opposite – suppressed pro-inflammatory gene expression and increased expression of immune-function-associated genes. Further analysis revealed that the cells of hedonic individuals showed increased activity of proteins that increase pro-inflammatory gene expression (called “transcription factors” [NF-kappa B and AP-1]) and decreased activity of transcription factors that regulate immune function (STAT and IRF). Eudaimonic individuals showed the opposite transcription factor activity pattern.

Thus, the two types of happiness “set off” different intracellular signaling pathways that result in vastly different cellular gene expression patterns [3].

This study is one of the first detailed molecular analyses of human happiness, and it surprisingly demonstrates molecular differences between forms of happiness defined by philosophers more than 2,400 years ago. It shows that the human genome is more sensitive to the specific “form” of happiness an individual experiences than the individual’s own consciousness is.

Furthermore, the study implies that hedonic and eudaimonic happiness may have different long-term health effects, as pro-inflammatory gene expression combined with immune suppression correlates with an increased risk for many chronic diseases. Thus, happiness based on a sense of connectedness and purpose (eudaimonia) may promote better health than happiness based on hedonism [3].

While this research is in its early stages, it might be used in the future in the diagnosis and possible treatment of depression and anhedonic states. The data could also be useful should the human race ever decide to alter gene expression within the human brain to promote greater happiness and better health.


References

  1. Aristotle, Nicomachean Ethics, chapter four, paragraph one.
  2. Philos Trans R Soc Lond B Biol Sci. 2004;359:1333-47.
  3. Proc Natl Acad Sci USA. 2013;110:13684-89.


The Blind Leading the Blind — The End of Religion Misrecognized


[Editor’s note: this is a response to a recent article, The End of Religion: Technology and the Future]

So much anti-religious dogmatism, so much misrecognized religiosity, so little time. It’s a wonder to me that some clearly sophisticated persons can express such unsophisticated opinions about religion. Maybe it’s just because we all have vested interests? On the one hand, those who have distanced themselves from tradition seek to justify their choice; on the other, those who have continued to embrace tradition likewise seek to justify theirs. What’s to be made of the strange creatures, arguably not so uncommon now or ever, that reject any notion of the choice being all or nothing or even mutually exclusive? What’s to be made of emerging culture that would both conserve and discard tradition in various ways to various extents?


History is littered with dead gods, so we say. And yet those gods seem to keep resurrecting themselves or each other, in an ongoing theological evolution. From the earliest inarticulate longings for transcendence, through acknowledged superhuman projections, and perhaps on into recursively self-realizing aspirations of superintelligent creation: the past, present, and hopefully the future are littered with living gods.

Will living gods endure? Will our descendants be too advanced to share such primitive beliefs? Maybe so, if we caricature the gods and their function as nothing more than supernatural superstition. On the other hand, will it ever be primitive to persist in trust that we can yet further transcend ourselves? If not, the gods will endure. Oh, they’ll surely change. They’ll evolve as they always have. And death or its analogs will continue to be part of evolution, as they always have been. But intelligence, so long as it can, will continue to do what intelligence does: optimize across environments for its ultimate goals. And those goals are its gods, by any other name as divine — if you don’t like “divine”, then go with “sublime” or whatever makes you happy while preserving the function.

Some suppose that the relevance of religion relies on suffering and death, and that suffering and death will be eradicated by emerging technology. I share their expectations regarding the eradication of suffering and death, so long as they’re willing to qualify “suffering and death” with “as we experience them presently”. If they’re not willing to make that qualification, I’m confident they’re in for some serious disillusionment, assuming they live long enough. A world without the analogs of suffering and death, without goals and change, is no world at all. Nihilism lies in that direction.

Beyond that, to suggest the relevance of religion relies on suffering and death is to under-appreciate the power of religion. The reasons are converses to those for which it would be nihilistic to suppose analogs of suffering and death could be meaningfully annihilated. The relevance of religion is in thriving and life. Life is indeed meaningless outside a context that includes analogs of suffering and death, but thriving is so much more than mere survival. Those of us who insist on explaining evolution in terms of mere survival have a ways to go, I contend.

In any case, in that superintelligent future, who will pray for solutions? Those who have moved on to new problems. Who will hope for grace from gods when science offers immortality? Those who recognize science can offer immortality without grace only to the extent it can offer a solution to the heat death of the universe without cooperation. Who will find their answers in ancient scriptures? Those who recognize new scripture informed of old scripture is always being written. It seems to me there will be plenty of praying for grace from gods into conceivable futures. But, but, but no one will call these things “scripture” or “prayer” or “grace” or “gods”? Meh. Those words have existed no longer than the age of the English language, and yet the phenomena they describe are uncontroversially far more ancient. Of course the words will change. That’s uninteresting. What’s interesting is whether and how and to what extent the associated phenomena will persist.

Those who point to technological assassination of God always do so in the most ironic way. God has been dying and will die, they say, because we will achieve all the powers we attribute to God. Uh. Ahem. Don’t they see?!? By supposing technology capable of endowing us with the power of God, they are supposing us capable of becoming functionally equivalent to God. Functionally, they’re appealing to the existence of God as an attempt to prove the non-existence of God! Squirm as they might, calling this “godlike” rather than “God”, the irony remains.

In overcoming present notions of suffering and death, we would not make ancient aspirations of how to achieve such ends irrelevant. In confronting apocalyptic risk, messianic wonder, and millenarian opportunity, religion does not somehow come to an end. To the contrary, we would simply justify those fears and hopes at last, as prophecies fulfilled! Did we come up with the idea that overcoming them is possible? Did we do all the work that led, step by step, to our present capabilities? Did the old gods not inspire our ancestors along the way? Are not the new gods inspiring us still? Abstracting across old and new gods, is not the living God always leading us toward transformation?

Okay, yeah. I get it. It’s easier to damn the old gods, misrecognize the new ones, and think lazily about function. But can we still, with a straight face, claim that prayer and ideology will not help? Can we embrace the reactionary anti-religious ideology without recognizing it as an ideology? Can we imagine the end of religion without recognizing such as a prayer? Can we raise our arms, stick out our chests, and pronounce moral obligation to relinquish religious beliefs with the fullest intent of provoking a communal strenuous mood to that end, and yet not recognize in our own behavior the persistence of the religious impulse?

Some of us manage to get away with this because we’ve somehow persuaded ourselves that religion is mostly or exclusively the silly and oppressive stuff that religious persons have thought and done. If a large group of persons support each other in thinking stupid thoughts, that’s religion! If a whole bunch of us wind each other up to hurt themselves or each other, that’s religion! If we come up with an organized way of opposing intellectual, technological, or moral progress then that, above all, that is religion!!! And yet no. Sorry. That’s not religion.

Yes, religion has been and is used to think and do the stupid and oppressive, the backwards and downwards. Yes, it’s been used to think and do such to greater extents than any of us is capable of alone. And yes, it seems nothing can compare to the power of religion to inspire evil. Yet that’s just the point: power to inspire. Esthetic power. That is religion. And it has also been and is used to think and do the smart and empowering, the forwards and upwards. The same power that killed Platonic science in Athens elevated Platonic science in Rome. The same power that destroyed Aristotelian science in Christian Europe preserved it in Muslim Africa. The same power that tempted influential Arabs to dogmatism inspired influential Europeans to enlightenment. Religion may yet damn us. It is powerful enough for that. And it may yet save us. It’s powerful enough for that too. That’s because it’s not inherently good or evil. It becomes good or evil as we apply its esthetic power for good or evil. And in doing so, we amplify the good and the evil.

Religion is not the enemy of the future. Neither is it the friend of the future. Religion is simply, and quite powerfully, going to be part of the future, whether anyone likes it or not. So indeed, if we are to survive and progress, we had better figure out how to mitigate the risks and pursue the opportunities that religion presents. Leaving it to chance could be even more disastrous than leaving nuclear weapons or synthetic biology or molecular engineering or machine intelligence to chance. When we leave power to chance, we’re really just leaving it in someone else’s hands. And that someone else might not have our interests in mind or in heart.

So let’s learn through science, and let’s govern through ethics, not confusing either of these with religion or supposing them incompatible with religion. Let’s also reach deep into ourselves and broadly throughout time and space to make and find stories that move us together. Let’s embrace stories that build redemptively from ancient aspirations, through cultural and technological evolution, and on into cosmic evolution, and that fill us with enduring passion to act strenuously. Let’s adopt stories that provoke us to nothing less than overflowing courage in the work to raise each other together beyond present notions of suffering and death to our transformation as radically compassionate creators. And let’s recognize these stories as our religion, both old and new, together pointing the way to our potential with, in, and as God.

###

Semantic Culturomics


ABSTRACT

Newspapers are testimonials of history. The same is increasingly true of social media such as online forums, online communities, and blogs. By looking at the sequence of articles over time, one can discover the birth and the development of trends that marked society and history – a field known as “Culturomics”. But Culturomics has so far been limited to statistics on keywords. In this vision paper, we argue that the advent of large knowledge bases (such as YAGO [37], NELL [5], DBpedia [3], and Freebase) will revolutionize the field. If their knowledge is combined with the news articles, it can breathe life into what is otherwise just a sequence of words for a machine. This will allow discovering trends in history and culture, explaining them through explicit logical rules, and making predictions about the events of the future. We predict that this could open up a new field of research, “Semantic Culturomics”, in which no longer human text helps machines build up knowledge bases, but knowledge bases help humans understand their society.

1. INTRODUCTION

Newspapers are testimonials of history. Day by day, news articles record the events of the moment – for months, years, and decades. The same is true for books, and increasingly also for social media such as online forums, online communities, and blogs. By looking at the sequence of articles over time, one can discover the trends, events, and patterns that mark society and history. This includes, e.g., the emancipation of women, the globalization of markets, or the fact that upheavals often lead to elections or civil wars. Several projects have taken to mining these trends. The Culturomics project [27], e.g., mined trends from the Google Book Corpus. We can also use the textual sources to extrapolate these trends to the future. Twitter data has been used to make predictions about election results, book sales, or consumer behavior. However, all of these analyses were mostly restricted to the appearance of keywords. The analysis of the role of genders in [27], for instance, was limited to comparing the frequency of the word “man” to the frequency of the word “woman” over time. It could not find out which men and women were actually gaining importance, or in which professions. This brings us to the general problem of such previous analyses: They are mostly limited to counting the occurrences of words. So far, no automated approach can actually bring deep insight into the meaning of news articles over time. Meaning, however, is the key for understanding roles of politicians, interactions between people, or reasons for conflict. For example, if a sentence reads “Lydia Taft cast her vote in 1756”, then this sentence gets its historic value only if we know that Lydia Taft was a woman, and that she lived in the United States, and that women’s suffrage was not established there until 1920. All of this information is lost if we count just words.

[Figure 1: Proportion of women among the people mentioned in Le Monde over time, overall and among politicians.]


We believe that this barrier will soon be broken, because we now have large commonsense knowledge bases (KBs) at our disposal: YAGO [37], NELL [5], TextRunner [4], DBpedia [3], and Freebase (http://freebase.com). These KBs contain knowledge about millions of people, places, organizations, and events. The creation of these KBs is an ongoing endeavor. However, with this vision paper, we take a step ahead of these current issues in research, and look at what can already be achieved with these KBs: If their knowledge is combined with the news articles, it can breathe life into what is otherwise just a sequence of words for a machine. Some news organisations¹ participate already in the effort of annotating textual data with entities from KBs. Once people, events, and locations have been identified in the text, they can be unfolded with the knowledge from the KB. With this combination, we can identify not just the word “woman”, but actually mentions of people of whom the KB knows that they are female. Figure 1 shows a proof of concept that we conducted on YAGO and the French newspaper Le Monde [15]. It looks at all occurrences of people in the articles, and plots the proportion of women both in general and among politicians. Women are mentioned more frequently over time, but the ratio is smaller among politicians. Such a detailed analysis is possible only through the combination of textual data and semantic knowledge.
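As a concrete illustration of the kind of analysis behind Figure 1, here is a minimal sketch that computes the proportion of women among the people mentioned per year, overall and among politicians. It assumes articles have already been annotated with KB entity IDs; the input format, field names, and toy facts are hypothetical.

```python
# Minimal sketch of the Figure-1 style analysis: proportion of women among
# people mentioned per year, overall and among politicians. The input format,
# field names, and KB lookup tables are hypothetical assumptions.
from collections import defaultdict

# Hypothetical KB facts: entity -> gender, entity -> set of professions.
gender = {"Lydia_Taft": "female", "Paul_Krugman": "male"}
professions = {"Lydia_Taft": {"politician"}, "Paul_Krugman": {"economist"}}

# Hypothetical annotated corpus: (year, entity_id) person mentions.
mentions = [(1756, "Lydia_Taft"), (2008, "Paul_Krugman")]

women, people = defaultdict(int), defaultdict(int)
women_pol, pol = defaultdict(int), defaultdict(int)

for year, entity in mentions:
    people[year] += 1
    if gender.get(entity) == "female":
        women[year] += 1
    if "politician" in professions.get(entity, set()):
        pol[year] += 1
        if gender.get(entity) == "female":
            women_pol[year] += 1

for year in sorted(people):
    overall = women[year] / people[year]
    among_pol = women_pol[year] / pol[year] if pol[year] else float("nan")
    print(year, f"overall={overall:.2f}", f"politicians={among_pol:.2f}")
```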

2. SEMANTIC CULTUROMICS

Semantic Culturomics is the large-scale analysis of text documents with the help of knowledge bases, with the goal of discovering, explaining, and predicting the trends and events in history and society.

Semantic Culturomics could for example answer questions such as: “In which countries are foreign products most prevalent?” (where the prevalence can be mined from the news, and the producer of a product, as well as its nationality, comes from the KB), “How long do celebrities usually take to marry?” (where co-occurrences of the celebrities can be found in blogs, and the date of marriage and profession comes from the KB), “What are the factors that lead to an armed conflict?” (where events come from newspapers, and economic and geographical background information comes from the KB), “Which species are likely to migrate due to global warming?” (where current sightings and environmental conditions come from textual sources, and biological information comes from the KB). None of these queries can be answered using only word-based analysis.

The explanations that Semantic Culturomics aims at could take the form of logical rules such as “A politician who was involved in a scandal often resigns in the near future”. Such rules can explain particular past events by pointing to a general pattern of history with past instances. They can also be used to make predictions, and to deliver an explanation as to why a certain prediction is made.

Semantic Culturomics would turn around a long-standing paradigm: Up to now, information extraction projects strive to distill computer-understandable knowledge from the textual data of the Web. Seen this way, human-produced text helps computers structure and understand this world. Semantic Culturomics would put that paradigm upside down: It is no longer human text that helps computers build up knowledge, but computer knowledge that helps us understand human text – and with it human history and society.
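To illustrate what such a rule could look like operationally, here is a minimal sketch that encodes the scandal-resignation pattern as a weighted rule and applies it to a toy event list, producing a prediction together with its explanation. The rule format, confidence value, and data are invented for illustration.

```python
# Toy sketch: a weighted rule applied over a small event list to predict
# (and explain) future events. Rule format, confidence, and data invented.
from dataclasses import dataclass

@dataclass
class Event:
    subject: str
    predicate: str
    year: int

events = [Event("PoliticianX", "involved_in_scandal", 2014)]

# "A politician involved in a scandal often resigns in the near future."
RULE = {"if": "involved_in_scandal", "then": "resigns",
        "within_years": 2, "confidence": 0.6}

def predict(events: list[Event]) -> list[tuple[Event, str]]:
    predictions = []
    for e in events:
        if e.predicate == RULE["if"]:
            horizon = e.year + RULE["within_years"]
            explanation = (f"{e.subject} may resign by {horizon} "
                           f"(confidence {RULE['confidence']}), because "
                           f"'{e.predicate}' in {e.year} matches the rule body")
            predictions.append((e, explanation))
    return predictions

for _, why in predict(events):
    print(why)
```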

3. STATE OF THE ART

Digital Humanities and Culturomics. The Digital Humanities make historical data digitally accessible in order to compare texts, visualize historic connections, and trace the spread of new concepts. The seminal paper in this area, [27], introduced the concept of “Culturomics” as the study of cultural trends through the quantitative analysis of digitized texts. This work was the first large-scale study of culture through digitized texts. Yet, as explained above, it remains bound to the words of the text. The work has since been advanced [20, 1], but still remains confined to counting occurrences and co-occurrences of words. Closer to our vision, the GDELT project [21] annotates news articles with entities and event types for deeper analysis. The focus is on the visualisation of trends. In contrast, Semantic Culturomics aims also at providing explanations for events, which become possible by the background knowledge from the KB.

Event prediction. A recent work [33] mined the New York Times corpus to predict future events. This work was the first that aimed at predicting (rather than modeling) events. Of particular interest is the ability to bound the time point of the predicted events. The authors make use of key phrases in the text, as well as semantic knowledge to some degree. A recent follow-up work [34] extended the analysis to Web queries. Another approach modeled causality of events by using background data from the Linked Open Data cloud [32]. These works were the first to address the prediction of events at large scale. [32] goes a long way towards the identification of events and causality. In a similar vein, Recorded Future², a company, has specialised in the detection and the prediction of events with the help of a KB [36]. However, these works built classifiers for predictions rather than the explicit patterns, in the form of logical rules, that we aim at. Furthermore, Semantic Culturomics would model the interplay between text and semantic knowledge in a principled way, and thus unify the prediction of future events with the modeling of past trends.

Predictive analytics. Businesses and government agencies alike analyze data in order to predict people’s behavior³. There is a business-oriented conference⁴ dedicated to these projects. Therefore, we believe that this endeavor should also be studied in a public, academic space. Furthermore, predictive analytics is mostly centered on a specific task in a specific domain. A model that can predict sales of a certain product cannot be used to predict social unrest in unstable countries. Semantic Culturomics, in contrast, aims at a broader modeling of the combination of textual sources and knowledge bases.

Social Media Analysis. Recently, researchers have increasingly focused on social media to predict social trends and social movements. They have used Twitter data and blogs to predict crowd phenomena, including illnesses [18], box office sales, the stock market, consumer demand, book sales, consumer behavior, and public unrest (see, e.g., [16] and references therein). Other Web data has been used to predict the popularity of a news article [13] or to analyze elections [39]. These works have demonstrated the value of Twitter for event prediction. However, they always target a particular phenomenon. We believe that what is needed is a systematic and holistic study of textual data for both explanation of the past and prediction of the future.

Machine Reading. Several projects have looked into mining the Web at large scale for facts [5, 4, 28, 38]. Recent work has mined the usual order of events from a corpus [40], the precedence relationships between facts in a KB [41], and implicit correlations in a KB [19]. Several of these methods can be of use for Semantic Culturomics. However, they can only be an ingredient to the project, because Semantic Culturomics aims at mining explicit logical rules, together with a temporal dimension, from text and KBs.

Enabling Technologies. Our vision of Semantic Culturomics can build on techniques from entity recognition, event detection, rule mining, and information extraction. We detail next how these techniques would have to be advanced.

4. CHALLENGES

Mining text in combination with knowledge bases is no easy endeavor.

The key challenges would be as follows:

Modeling hybrid data. KBs contain knowledge about entities, facts, and sometimes logical axioms. Text, on the other hand, determines the importance of an entity, the co-occurrence of entities, the location of entities in time, the type of events in which an entity is involved, the topic of an entity, and the actions of entities. Thus, Semantic Culturomics has to operate on a hybrid space of textual data and semantic knowledge. KB information is usually represented in RDF. RDF, however, cannot model time, let alone textual content. Other approaches can represent hybrid data, but do not allow KB-style reasoning [22, 12, 44, 6].

Semantic Culturomics calls for a new data model, which can represent entities and their mentions, textual patterns between entities, the dimension of time, and out-of-KB entities. In analogy to an OLAP data cube, this data model could be called a “Semantic Cube”. It should support truly hybrid query operations, such as: perform phrase matching to find all text parts that contain names of entities with a certain property; choose one out of several disambiguations for a mention; given a logical rule, remove all facts that match the antecedent, and replace them by the succedent; dice the cube so that the text contains all paraphrases of a relation name. The goal is to develop a query language that subsumes all types of analyses that can be of interest on hybrid data of text and semantic KBs in general.
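To make the hybrid data model more concrete, here is a minimal sketch of what a Semantic Cube record and one hybrid query operation (phrase matching against a KB property) might look like. All class names, fields, and toy data are hypothetical illustrations, not a proposed standard.

```python
# Minimal sketch of a hybrid "Semantic Cube" over text and KB facts.
# All class names, fields, and the toy data are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class Mention:
    surface: str        # the string as it appears in the text
    entity: str | None  # linked KB entity ID, or None if out-of-KB
    doc_id: str
    year: int           # the time dimension of the cube

@dataclass
class SemanticCube:
    facts: set[tuple[str, str, str]] = field(default_factory=set)  # (s, p, o)
    mentions: list[Mention] = field(default_factory=list)

    def entities_with(self, prop: str, value: str) -> set[str]:
        return {s for (s, p, o) in self.facts if p == prop and o == value}

    def match_phrase(self, prop: str, value: str) -> list[Mention]:
        """Hybrid query: text parts mentioning entities with a KB property."""
        targets = self.entities_with(prop, value)
        return [m for m in self.mentions if m.entity in targets]

cube = SemanticCube(
    facts={("Lydia_Taft", "gender", "female"),
           ("Lydia_Taft", "citizenOf", "United_States")},
    mentions=[Mention("Lydia Taft", "Lydia_Taft", "doc42", 1756)],
)
print(cube.match_phrase("gender", "female"))  # -> [Mention(... year=1756)]
```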

Identify events and entities. Given, for example, a history of the newspaper articles of a certain region, we want to be able to predict the crime rate, voting patterns, or the rise of a certain person to political prominence. In order to mine trends from a given text corpus, we have to develop methods that can load the textual data (jointly with the structured knowledge) into a Semantic Cube. This requires first and foremost the identification of entities and events in the textual corpora.

There is a large body of prior work on information extraction, and on event mining in news articles [7, 24, 43]. However, most of this work is non-ontological: It is not designed to connect the events to types of events and to entities of the KB. Several works have addressed the problem of mapping entity mentions to known entities in the KB (e.g., [14, 26]). However, these works can deal only with entities that are known to the KB. The challenge remains to handle new entities with their different names. For example, if Lady Gaga is not in the KB and is mentioned in the text, we want to create a new entity Lady Gaga. However, if we later find Stefani Germanotta in the text, then we do not want to introduce a new entity, but rather record this mention as an occurrence of Lady Gaga with a different name.
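A minimal sketch of this behavior might look as follows. It assumes a hypothetical alias table linking name variants; a real system would have to learn such links from context rather than from a hard-coded dictionary.

```python
# Minimal sketch of out-of-KB entity handling: link a mention to a known
# entity when possible, otherwise mint a new entity ID. The alias table is
# a hard-coded assumption standing in for learned coreference evidence.
kb_entities: set[str] = {"Barack_Obama"}          # entities already in the KB
aliases = {"Stefani Germanotta": "Lady Gaga"}     # hypothetical variant links
minted: dict[str, str] = {}                       # canonical name -> new ID

def resolve(mention: str) -> str:
    canonical = aliases.get(mention, mention)     # normalize name variants
    entity_id = canonical.replace(" ", "_")
    if entity_id in kb_entities:                  # known KB entity
        return entity_id
    if canonical not in minted:                   # first sighting: mint entity
        minted[canonical] = entity_id
    return minted[canonical]                      # later sightings reuse it

print(resolve("Lady Gaga"))           # mints the new entity Lady_Gaga
print(resolve("Stefani Germanotta"))  # resolves to the same new entity
```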

Empower rule mining. The goal of Semantic Culturomics is not only to mine trends, but also to explain them. These explanations will take the form of logical rules, weighted with confidence and support measures. Rule mining, or inductive logic programming, has been studied in a variety of contexts [11, 23, 29, 10, 8, 31, 17]. Yet, for Semantic Culturomics we envision rules that cannot be mined with current approaches.

We would like to mine numerical rules such as “Mathematicians publish their most remarkable works before their 36th birthday”, or “The spread between the imports and the exports of a country correlates with its current account deficit”. Previous work on numeric rule mining [31, 25] was restricted to learning intervals for numeric variables. Other approaches can learn a function [17, 9], but have been tested only on comparatively small KBs (less than 1000 entities) – far short of the millions of entities that we aim at.
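As a toy illustration of how such a rule could be measured, here is a minimal sketch that computes support and confidence for the mathematician rule over a handful of invented facts. A real KB would contain millions of entities; the facts and the threshold of 36 are purely illustrative.

```python
# Toy sketch: support and confidence of the numerical rule
# "mathematician(x) => age_at_best_work(x) < 36" over invented facts.
people = [
    {"name": "A", "mathematician": True,  "age_at_best_work": 24},
    {"name": "B", "mathematician": True,  "age_at_best_work": 41},
    {"name": "C", "mathematician": True,  "age_at_best_work": 33},
    {"name": "D", "mathematician": False, "age_at_best_work": 55},
]

matches_body = [p for p in people if p["mathematician"]]
matches_rule = [p for p in matches_body if p["age_at_best_work"] < 36]

support = len(matches_rule)                # instances satisfying the rule
confidence = support / len(matches_body)   # fraction of body matches
print(f"support={support}, confidence={confidence:.2f}")  # support=2, 0.67
```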

We also aim to mine temporal rules such as “An election is followed by the inauguration of a president”. These should also predict the time of validity of literals. First work in this direction [30] has so far been tried only on toy examples.

Another challenge is to mine rules with existential variables, such as “People usually have a male father and a female mother”. Such rules have to allow several literals in the succedent, meaning that Horn rule mining approaches and concept learning approaches become inapplicable. Statistical schema induction [42] can provide inspiration, but has not addressed existential rule learning in general.

We would also need rules with negation, such as “People marry only if they are not yet married”. Such rules have been studied [31], but not under the Open World Assumption. In this setting, learning rules with negation risks learning the patterns of incompleteness in the KB rather than negative correlations in reality. Furthermore, there exist many more statements outside the KB than inside it, meaning that we risk mining a large number of irrelevant negative statements.

Finally, we want to mine rules that take into account the textual features that the hybrid space brings. These are features such as the importance of an entity or the textual context in which an entity (or a pair of entities) appears. [35] mines rules on textual phrases, but does not take into account logical constraints from the KB. If we succeed in mining rules that take into account textual features, the reward will be highly attractive: Finally, we will be able to explain why a certain event happened – by giving the patterns that have led to this type of event in the past.

Privacy. Predicting missing facts also means that some facts will no longer be private. For instance, consider a rule that can predict the salary of a person given the diploma, the personal address, and the employment sector. Smart social applications could warn the user when she discloses information that, together with already disclosed information, allows predicting private data. The intuition is that automatic rule mining could reveal surprising rules that humans may not directly see or may ignore, as shown in [2].

5. CONCLUSION

In this vision paper, we have outlined the idea of Semantic Culturomics, a paradigm that uses semantic knowledge bases in order to give meaning to textual corpora such as news and social media. This idea is not without challenges, because it requires the link between textual corpora and semantic knowledge, as well as the ability to mine a hybrid data model for trends and logical rules. If Semantic Culturomics succeeds, however, it would add an interesting twist to the digital humanities: semantics. Semantics turns the texts into rich and deep sources of knowledge, exposing nuances that today’s analyses are still blind to. This would be of great use not just for historians and linguists, but also for journalists, sociologists, public opinion analysts, and political scientists. They could, e.g., search for mentions of politicians with certain properties, for links between businessmen and judges, or for trends in society and culture, conditioned by age of the participants, geographic location, or socio-economic indicators of the country. Semantic Culturomics would bring a paradigm shift, in which no longer human text is at the service of knowledge bases, but knowledge bases are at the service of human understanding.

6. REFERENCES

[1] O. Ali, I. N. Flaounas, T. D. Bie, N. Mosdell, J. Lewis, and N. Cristianini. Automating news content analysis: An application to gender bias and readability. In WAPA, 2010.

[2] N. Anciaux, B. Nguyen, and M. Vazirgiannis. Limiting data collection in application forms: A real-case application of a founding privacy principle. In PST, 2012.

[3] S. Auer, C. Bizer, G. Kobilarov, J. Lehmann, R. Cyganiak, and Z. G. Ives. DBpedia: A Nucleus for a Web of Open Data. In ISWC, 2007.

[4] M. Banko, M. J. Cafarella, S. Soderland, M. Broadhead, and O. Etzioni. Open Information Extraction from the Web. In IJCAI, 2007.

[5] A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E. R. H. Jr., and T. M. Mitchell. Toward an architecture for never-ending language learning. In AAAI, 2010.

[6] D. Colazzo, F. Goasdoué, I. Manolescu, and A. Roatis. RDF analytics: Lenses over semantic graphs. In WWW, 2014.

[7] A. Das Sarma, A. Jain, and C. Yu. Dynamic relationship and event discovery. In WSDM, 2011.

[8] L. Dehaspe and H. Toironen. Discovery of relational association rules. In Relational Data Mining. 2000.

[9] N. Fanizzi, C. d’Amato, and F. Esposito. Towards numeric prediction on owl knowledge bases through terminological regression trees. In ICSC, 2012.

[10] L. Galárraga, C. Teflioudi, K. Hose, and F. M. Suchanek. AMIE: Association rule mining under incomplete evidence in ontological knowledge bases. In WWW, 2013.

[11] B. Goethals and J. Van den Bussche. Relational association rules: getting warmer. In Pattern Detection and Discovery. 2002.

[12] J. Han. Mining heterogeneous information networks by exploring the power of links. In ALT, 2009.

[13] E. Hensinger, I. Flaounas, and N. Cristianini. Modelling and predicting news popularity. Pattern Anal. Appl., 16(4), 2013.

[14] J. Hoffart, M. A. Yosef, I. Bordino, H. Fürstenau, M. Pinkal, M. Spaniol, B. Taneva, S. Thater, and G. Weikum. Robust disambiguation of named entities in text. In EMNLP, 2011.

[15] T. Huet, J. Biega, and F. M. Suchanek. Mining history with Le Monde. In AKBC, 2013.

[16] N. Kallus. Predicting crowd behavior with big public data. In WWW, 2014.

[17] A. Karalič and I. Bratko. First order regression. Machine Learning, 26(2-3), 1997.

[18] V. Lampos and N. Cristianini. Nowcasting events from the social web with statistical learning. ACM Trans. Intell. Syst. Technol., 3(4), Sept. 2012.

[19] N. Lao, T. Mitchell, and W. W. Cohen. Random walk inference and learning in a large scale knowledge base. In EMNLP, 2011.

[20] K. Leetaru. Culturomics 2.0: Forecasting large-scale human behavior using global news media tone in time and space. First Monday, 16(9), 2011.

[21] K. Leetaru and P. Schrodt. GDELT: Global data on events, language, and tone, 1979-2012. In International Studies Association Annual Conference, 2013.

[22] C. X. Lin, B. Ding, J. Han, F. Zhu, and B. Zhao. Text cube: Computing ir measures for multidimensional text database analysis. In ICDM, 2008.

[23] F. A. Lisi. Building rules on top of ontologies for the semantic web with inductive logic programming. Theory and Practice of Logic Programming, 8(3), 2008.

[24] W. Lu and D. Roth. Automatic event extraction with structured preference modeling. In ACL, 2012.

[25] A. Melo, M. Theobald, and J. Voelker. Correlation-based refinement of rules with numerical attributes. In FLAIRS, 2014.

[26] P. N. Mendes, M. Jakob, A. García-Silva, and C. Bizer. DBpedia Spotlight: Shedding light on the web of documents. In I-SEMANTICS, 2011.

[27] J.-B. Michel, Y. K. Shen, A. P. Aiden, A. Veres, M. K. Gray, The Google Books Team, J. P. Pickett, D. Hoiberg, D. Clancy, P. Norvig, J. Orwant, S. Pinker, M. A. Nowak, and E. L. Aiden. Quantitative analysis of culture using millions of digitized books. Science, 331(6014), 2011.

[28] N. Nakashole, M. Theobald, and G. Weikum. Scalable knowledge harvesting with high precision and high recall. In WSDM, 2011.

[29] V. Nebot and R. Berlanga. Finding association rules in semantic web data. Knowledge-Based Systems, 25(1), 2012.

[30] M. C. Nicoletti, F. O. S. de Sá Lisboa, and E. R. H. Jr. Automatic learning of temporal relations under the closed world assumption. Fundam. Inform., 124(1-2), 2013.

[31] J. R. Quinlan. Learning logical definitions from relations. Machine learning, 5(3), 1990.

[32] K. Radinsky, S. Davidovich, and S. Markovitch. Learning to predict from textual data. J. Artif. Intell. Res., 45, 2012.

[33] K. Radinsky and E. Horvitz. Mining the web to predict future events. In WSDM, 2013.

[34] K. Radinsky, K. M. Svore, S. T. Dumais, M. Shokouhi, J. Teevan, A. Bocharov, and E. Horvitz. Behavioral dynamics on the web: Learning, modeling, and prediction. ACM Trans. Inf. Syst., 31(3), 2013.

[35] S. Schoenmackers, O. Etzioni, D. S. Weld, and J. Davis. Learning first-order horn clauses from web text. In EMNLP, 2010.

[36] S. Truvé. Big Data for the Future: Unlocking the Predictive Power of the Web. Technical report, Recorded Future, 2011.

[37] F. M. Suchanek, G. Kasneci, and G. Weikum. YAGO: A core of semantic knowledge – unifying WordNet and Wikipedia. In WWW, 2007.

[38] F. M. Suchanek, M. Sozio, and G. Weikum. SOFIE: A self-organizing framework for information extraction. In WWW, 2009.

[39] S. Sudhahar, T. Lansdall-Welfare, I. N. Flaounas, and N. Cristianini. Electionwatch: Detecting patterns in news coverage of us elections. In EACL, 2012.

[40] P. P. Talukdar, D. T. Wijaya, and T. M. Mitchell. Acquiring temporal constraints between relations. In CIKM, 2012.

[41] P. P. Talukdar, D. T. Wijaya, and T. M. Mitchell. Coupled temporal scoping of relational facts. In WSDM, 2012.

[42] J. Völker and M. Niepert. Statistical schema induction. In ESWC, 2011.

[43] D. Wang, T. Li, and M. Ogihara. Generating pictorial storylines via minimum-weight connected dominating set approximation in multi-view graphs. In AAAI, 2012.

[44] P. Zhao, X. Li, D. Xin, and J. Han. Graph cube: On warehousing and OLAP multidimensional networks. In SIGMOD, 2011.

Footnotes

¹ http://developer.nytimes.com/docs/semantic_api

² http://www.recordedfuture.com

³ http://www.forbes.com/sites/gregpetro/2013/06/13/what-retail-is-learning-from-the-nsa/

⁴ http://www.predictiveanalyticsworld.com/

###

This work is licensed under the Creative Commons Attribution NonCommercial-NoDerivs 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/. Obtain permission prior to any use beyond those covered by the license. Contact copyright holder by emailing info@vldb.org. Articles from this volume were invited to present their results at the 40th International Conference on Very Large Data Bases, September 1st – 5th 2014, Hangzhou, China. Proceedings of the VLDB Endowment, Vol. 7, No. 12 Copyright 2014 VLDB Endowment 2150-8097/14/08.

R.U. Sirius and Jay Cornell on Coast to Coast AM — Highlight


Former h+ Magazine editor R.U. Sirius and co-author Jay Cornell appeared on the alternative radio program Coast to Coast AM last night. Here’s a highlight where R.U. and Jay sensibly discuss both current AI progress and risks.

Trigger warning: the highlight clip that follows is a discussion of Bigfoot. Yikes.


Read the First Chapter of the Transhumanist Reader for Free


The first chapter of The Transhumanist Reader, the first comprehensive overview of transhumanism, edited by h+ Chair Natasha Vita-More and Max More, has been made available online for free by the publisher John Wiley.

Download the first chapter, The Philosophy of Transhumanism, and read it for free here.

Want more?

Get your copy of this groundbreaking book here.

The End of Religion: Technology and the Future

History is littered with dead gods. The Greek and Roman gods, and thousands of others, have perished. Yet Allah, Yahweh, Krishna, and a few more survive. But will belief in the gods endure? It will not. Our descendants will be too advanced to share such primitive beliefs.

If we survive and science progresses, we will manipulate the genome, rearrange the atom, and augment the mind. And if science defeats suffering and death, religion as we know it will die. Without suffering and death, religion will have lost its raison d’être. For who will pray for heavenly cures, when the cures already exist on earth? Who will die hoping for a reprieve from the gods, when science offers immortality? With the defeat of death, science and technology will have finally triumphed over superstition. Our descendents will know, once and for all, that they are stronger than imaginary gods.

As they continue to evolve, our post-human progeny will become increasingly godlike. They will overcome human physical and psychological limitations, and achieve superintelligence, either by modifying their brains or by interfacing with computers. While we can’t know this for sure, what we do know is that the future will not be like the past. From our perspective, if science and technology continue to progress, our offspring will come to resemble us about as much as we do the amino acids from which we sprang.

As our descendants distance themselves from their past, they will lose interest in the gods. Such primitive ideas may even be unthinkable for them. Today the gods are impotent; tomorrow they’ll be irrelevant. You may doubt this. But do you really think that in a thousand or a million years your descendants, travelling through an infinite cosmos with augmented minds, will find their answers in ancient scriptures? Do you really think that powerful superintelligence will cling to the primitive mythologies that once satisfied ape-like brains? Only the credulous can believe such things. In the future gods will exist … only if we become them.

Still, the future is unknown. Asteroids, nuclear war, environmental degradation, climate change, or deadly viruses and bacteria may destroy us. Perhaps the machine intelligences we create will replace us. Or we might survive but create a dystopia. None of these prospects is inviting, but they all entail the end of religion.

Alternatively, in order to maintain the status quo, some combination of neo-Luddites, political conservatives, or religious fanatics could destroy past knowledge, persecute the scientists, censor novel ideas, and usher in a new Dark Ages of minimal technology, political repression, and antiquated religion. But even if they were successful, this would not save them or their archaic ideas. For killer asteroids, antibiotic-resistant bacteria, or some other threat will inevitably emerge. And when it does, only science and technology will save us—prayer or ideology will not help. Either we evolve or we will die.

But must we relinquish religious beliefs now, before science defeats death, before we become godlike? We may eventually outgrow religious beliefs, but why not allow their comforts to those who still need them? If parents lose a child or children lose a parent, what’s wrong with telling them they’ll be reunited in heaven? I am sympathetic to noble lies; sometimes they are justified. If a belief helps you and doesn’t hurt others, it is hard to gainsay.

Still, religious consolation has a price. Religion, and conservative philosophies in general, typically opposes intellectual, technological, and moral progress. Religion has fought against free speech, democracy, the eradication of slavery, sex education, reproductive technologies, stem cell research, women’s and civil rights, and the advancement of science. It has been aligned with inquisitions, war, human sacrifice, torture, despotism, child abuse, intolerance, fascism, and genocide. It displays a fondness for the supernatural and the authoritarian, for the misogynistic, hierarchical, anti-democratic, anti-intellectual, anti-scientific, and anti-progressive. Religion has caused an untold amount of misery.

One could even argue that religious beliefs are the most damaging beliefs possible. Consider that Christianity rose in power as the Roman Empire declined, resulting in the marginalization of the Greek science that the Romans had inherited. If the scientific achievements of the Greeks had been built upon throughout the Middle Ages, if science had continued to advance for those thousand years, we might live in an unimaginably better world today. Who knows how many diseases would be cured by now? Who knows how advanced our intellectual and moral natures might be? Maybe we would have already overcome death. Maybe we still die today because of religion.

The cultural domination by Christianity during the Middle Ages resulted in some of the worst conditions known in human history. Much the same could be said of religious hegemony in other times and places. And if religion causes less harm in some places today than it once did, that’s because it has less power than it used to. Were that power regained, the result would surely be disastrous, as anyone who studies history or lives in a theocracy will confirm. Put simply, religion is an enemy of the future. If we are to survive and progress, ideas compatible with brains forged in the Pleistocene must yield. We shouldn’t direct our gaze to the heavens but to the earth, where the real work of making a better world takes place.

Of course religion is not the only anti-progressive force in the world—there are other enemies of the future. Some oppose progressive ideas even if they are advanced by the religious. Consider how political conservatives, virtually all of whom profess to be Christians, denounce Pope Francis’ role in re-establishing Cuban-American relations, his criticism of unfettered capitalism and vast income inequality, and his warnings about the dangers of climate change. The plutocrats and despots hate change, especially if it affects their wallets. The beneficiaries of the status quo don’t want a better world—they like the one they have.

How then do we make a better world? What will guide us in this quest? For there to be a worthwhile future we need at least three things: 1) knowledge of ourselves and the world; 2) ethical values that promote the flourishing of conscious beings; and 3) a narrative to give life meaning. But where do we find them?

Knowledge comes from science, which is the only cognitive authority in the world today. Science explains forces that were once dark and mysterious. It reveals the vast immensity, history and future of the cosmos. It explains our biological origins and the legacy that evolutionary history leaves upon our thoughts and behaviors. It tells us how the world works independent of ideology or prejudice. And applied science is technology, which gives us the power to overcome limitations and make a better future. If you want to see miracles, don’t go to Lourdes, look inside your cell phone.

Ethical values do not depend on religion. The idea that people can’t be moral without religion is false, no matter how many think otherwise. The claim that morality is grounded in religion is also false, as can easily be demonstrated. Ethical values and behaviors arose in our evolutionary history, where they may also find their justification. Yes, the moral-like behaviors sometimes favored by evolution have also been prescribed by religion—cooperation and altruism come to mind—but the justification of these values is biological and social, not supernatural. We are moral because, for the most part, it’s in our self-interest. We all do better if we all cooperate. Everyone can endorse values that aid our survival and flourishing—even our godlike descendants.

Finally we need a new narrative to replace outdated religious ones—a narrative to give our lives meaning and purpose. We need a story that appeals to the educated, not superstition and mythology. With the death of religion imminent, we need to look elsewhere for meaning and purpose.

Fortunately such a narrative already exists. It is the story of cosmic evolution, the story of the cosmos becoming self-conscious. Nature gave birth to consciousness, and consciousness comes to know nature. Through this interaction of the universe and the minds that emerge from it, reality comes to know itself. Surely this story is profound enough to satisfy our metaphysical longings. And it has an added benefit over mythological accounts—it’s based on science.

What is our role in this story? We are the protagonists of the evolutionary epic; determining its course is our destiny. We should willingly embrace our role as agents of evolutionary change, helping evolution to realize new possibilities. We are not an end, but a beginning. We are as links in a chain leading upward to higher forms of being and consciousness. This is our hope, this gives our lives meaning.

I don’t know if we can make a better future, but I know that no help will come from the gods. Turning our backs on them is a first step on our journey.

###

John G. Messerly, Ph.D., taught for many years in both the philosophy and computer science departments at the University of Texas at Austin. His most recent book is The Meaning of Life: Religious, Philosophical, Scientific, and Transhumanist Perspectives. He blogs daily on issues of futurism and the meaning of life at reasonandmeaning.com.

The post The End of Religion: Technology and the Future appeared first on h+ Magazine.

]]>
http://hplusmagazine.com/2015/01/26/end-religion-technology-future/feed/ 1
Programmed Death, H. pylori, and the Legacy of George Williams http://hplusmagazine.com/2015/01/26/programmed-death-h-pylori-legacy-george-williams/ http://hplusmagazine.com/2015/01/26/programmed-death-h-pylori-legacy-george-williams/#comments Mon, 26 Jan 2015 18:21:15 +0000 http://hplusmagazine.com/?p=26305 Last month, an NYU Med School doc published an article saying that our gut bacteria might be evolved to promote our digestion when we’re young, but kill us when we’re old. This sounds like evolutionarily programmed death–just the kind of thesis I have been promoting for 18 years. Why am I not cheering?

The post Programmed Death, H. pylori, and the Legacy of George Williams appeared first on h+ Magazine.

]]>

George_C._Williams

Last month, an NYU Med School doc published an article saying that our gut bacteria might be evolved to promote our digestion when we’re young, but kill us when we’re old.  This sounds like evolutionarily programmed death–just the kind of thesis that I have been promoting for 18 years.  The article was picked up by Scientific American, Psychology Today, Science News, and EurekAlert! of the AAAS.  Why am I not cheering?


In 1966, a smart, confident young biologist named George Williams wrote a book that changed the culture and methods of evolutionary biology.  Up until that time, evolutionary biology had been primarily a qualitative, observational science, in the tradition of Darwin.  (There is not a single equation in any of Darwin’s books.)  Practitioners had absorbed the message that natural selection was about gaining an advantage in survival or reproduction, and this was the lens through which they looked at biological function and behaviors.  When they found a subtle or unexpected advantage of this sort, evolutionary biologists were excited to report a new understanding of a phenomenon which perhaps had not made sense hitherto. This was wishy-washy, 19th Century science.

But on a parallel track, having almost no communication with the field biologists, there were a handful of evolutionary geeks–scientists who were trained in mathematics or physics, and who took up evolution in the same spirit, using methods borrowed from theoretical physics.  In the early 20th Century there were Alfred Lotka and R. A. Fisher and Sewall Wright; in the mid-century there were J. B. S. Haldane and Theodosius Dobzhansky (just to pronounce his name makes you fitter), and Wright just kept on keeping on, writing and researching until his death at age 98.  These men had developed a quantitative science of evolution called “the new synthesis” or “neo-Darwinism” or “population genetics”, but their numbers were few and they tended to think abstractly, without a deep knowledge of biology in all its messiness, and so their influence on the larger biological community was quite limited.

Williams’s book heralded the merging of these two spheres of evolutionary science.

Williams was a biologist through and through, who earned his degree working with real fish in real habitats.  But at a young age, he had also deeply absorbed the methods and disciplines of the geeks. With a book boldly titled Adaptation and Natural Selection, he challenged his fellow evolutionists to

  • think quantitatively about evolution, and
  • think about the actual mechanisms by which an adaptation might evolve.

These were much needed disciplines, and Williams had the commanding, incisive writing style to bring his point home to a community that had become accustomed to field biology as a descriptive science.

But these are also difficult disciplines, requiring a fundamental shift in the thought process.  As it happened, Williams’s message in Adaptation was caricatured, and for decades after publication it came to be transmitted to students of evolution in the Orwellian form,

“Individual selection, good; Group selection, bad.”

To be sure, Williams was not innocent of this message; nevertheless, his thinking was a lot more cogent and more comprehensive than the sound bites that came to be transmitted in his name.  The book was written as an appeal for more rigor from Williams’s colleagues.  The part of his argument that survived concerned only group selection, and it went something like this:

Every new trait that appears begins life as a random mutation in one individual.  If the mutation promotes survival or reproduction, it will spread through the population, otherwise it will gradually die out.

Suppose a mutation arises that is bad for the individual’s fitness, but offers a benefit to the community.  (This is the definition of evolutionary altruism.) In theory, this trait might help a group to compete successfully against other groups that did not have the trait.  But its first hurdle is that it must come to dominate the group, and every individual who carries the trait is at a disadvantage compared to other individuals in the group.  So the new gene faces an uphill battle in its first step, and it is likely never to get to the next step, where it can show its stuff in group-against-group competition.

Therefore altruistic traits are unlikely to survive natural selection.

(not a quote–this is my paraphrase)

This is a well-reasoned argument, worthy of attention and consideration, but it is not the end of the story.  The argument is rooted in the mechanism of the Selfish Gene Model, which assumes that

  • evolution works one-gene-at-a-time,
  • genes contribute independently to fitness,
  • populations are thoroughly mixed (random mating)
  • genes rise and fall in a population quickly, while the population size (and everything else about the environment) changes slowly

When we write these assumptions out explicitly, it’s clear that sometimes they are true, and sometimes not.  Sometimes the Selfish Gene model does fine, and sometimes a more complex and sophisticated model is called for.

If we take to heart Williams’s deeper message, then when we find traits in nature that look like altruism, we should consider all the mechanisms by which they might have evolved, evaluate them quantitatively, and decide on the most plausible evolutionary explanation.
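
To see the force of the argument quantitatively, here is a minimal sketch in Python (my illustration, with invented parameters) of within-group selection against an altruistic allele under exactly the assumptions above: one locus, independent fitness effects, a thoroughly mixed population.

import random

# Haploid Wright-Fisher sketch. The altruistic allele pays a personal
# fitness cost; any group-level benefit never registers because the
# population is well-mixed. All parameters are invented for illustration.

def simulate(pop_size=1000, initial_freq=0.05, cost=0.02,
             generations=500, seed=1):
    rng = random.Random(seed)
    freq = initial_freq
    for gen in range(generations):
        # Selection: altruists have relative fitness 1 - cost.
        weighted = freq * (1 - cost)
        p = weighted / (weighted + (1 - freq))
        # Drift: binomial resampling of the next generation.
        freq = sum(rng.random() < p for _ in range(pop_size)) / pop_size
        if freq in (0.0, 1.0):
            break
    return gen, freq

print(simulate())   # typically ends at 0.0: the altruistic allele is lost

The allele’s personal cost dooms it long before any group-against-group competition can occur; it is relaxing the mixing assumption (adding population structure) that gives group selection models their traction.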

 

If things had developed as they should…

It was right, of course, that the field biologists and the evolution geeks should talk to each other.  They purported to be studying the same subject.  But what should have happened (in my counterfactual history) when the biologists first compared notes with the mathematicians is that the biologists should have had the upper hand.  “Here’s where your theory works–here’s where it doesn’t.  Go back to the drawing board and give us a more complete theory that explains what we see.”  We might have arrived at an understanding of the usefulness of the Selfish Gene, and also of its limitations.  Whenever there is a conflict between theory and observation, it is theory that must bend.

But in real life, this is not what happened.  In real life, the mathematicians were smart and brash, and the field biologists were more tentative, nuanced in their understanding, a little embarrassed about the contradictions in their findings, and easily intimidated by formulas.  All too often, the mathematicians simply told the field biologists they were wrong, that they were not seeing what they thought they were seeing, that theory forbids it.

The theory they invoked was the simplistic Selfish Gene theory, because this was a time before computer models, before the ideas of evolutionary ecology had a platform for development.  Theorists did not keep in mind that the Selfish Gene model was just a model, and they insisted that it must explain everything.

The message of George Williams survived mostly in its caricatured form.  The rigor he required proved to be too demanding for biologists, and two generations of evolutionists took the shortcut of affording credence to any explanation that looked to be based on individual benefit, and dismissed any explanation based in group selection.

So this became the era of the Selfish Gene, which is just now winding down, and an appreciation of Williams’s true message is finally spreading through the community.

 

Gut Bacteria and Programmed Death–What does the article say?

We circle back now and consider the article by Martin Blaser, microbiologist at NYU, and Glenn Webb, who does computer modeling of biological populations at Vanderbilt.  The failed revolution of George Williams sets the stage for all that is effective and all that is missing in Blaser & Webb’s analysis.

The age structure of human populations is exceptional among animal species. Unlike with most species, human juvenility is extremely extended, and death is not coincident with the end of the reproductive period. We examine the age structure of early humans with models that reveal an extraordinary balance of human fertility and mortality. We hypothesize that the age structure of early humans was maintained by mechanisms incorporating the programmed death of senescent individuals, including by means of interactions with their indigenous microorganisms. First, before and during reproductive life, there was selection for microbes that preserve host function through regulation of energy homeostasis, promotion of fecundity, and defense against competing high-grade pathogens. Second, we hypothesize that after reproductive life, there was selection for organisms that contribute to host demise. While deleterious to the individual, the presence of such interplay may be salutary for the overall host population in terms of resource utilization, resistance to periodic diminutions in the food supply, and epidemics due to high-grade pathogens.

In particular, they cite H. pylori (famous for its connection to ulcers) as an example of a bacterial strain that promotes digestion and good health in young humans, but that can lead to cancer of the stomach or esophagus late in life.  Perhaps this bacterium serves the function of removing from the population older individuals who are no longer fertile, who consume scarce food even though they can no longer contribute to the reproductive rate that keeps the community viable.

I find the thesis unclear on several different levels.

  • What is evolving here, the genome of the bacteria or of the humans?
  • What is the mechanism by which intestinal bacteria spread through a community?
  • What measure of fitness is applied in the model, and is it the fitness of the bacteria or of the humans?

To the credit of these authors, they do not categorically dismiss group selection or programmed aging from consideration.  In fact, they claim in the paper’s Introduction to explicitly consider both.  But I’ve had a devil of a time trying to figure out how their model works, let alone how it answers the standard broadsides against group selection and programmed aging.  I find myself agreeing with Williams in his appeal for clarity and rigor.

 

Cui bono?

“Who profits?”  The title of the paper is Host Demise as a Beneficial Function of Indigenous Microbiota in Human Hosts, but whose benefit are they talking about?  Certainly not that of the individual human who dies of esophageal cancer, nor of the gut bacteria that die with him and relinquish their opportunity to be transmitted human-to-human.

The root of the unclarity is that our understanding of the relationship between gut bacteria and the host human is still quite hazy.  There is enormous variability from one person to the next in the type and variety of gut bacteria.  There are benefits to the host human from some combinations, and diseases that come from others.  It seems that there is no one optimal bacterial community that is right for everyone, but rather that the interactions among an individual’s metabolism, his environment, his diet, and his bacterial community form a complex system.

Do the bacteria in our guts serve at the pleasure of the host (that’s us), or are they opportunistic invaders?  These are major open questions in the field of microbiology. The answer seems to be “neither”, and the ecology of each person’s bacterial community has an integrity and a logic all its own.

Mammals harbour a complex gut microbiome, comprising bacteria that confer immunological, metabolic and neurological benefits. Despite advances in sequence-based microbial profiling and myriad studies defining microbiome composition during health and disease, little is known about the molecular processes used by symbiotic bacteria to stably colonize the gastrointestinal tract…the gut normally contains hundreds of bacterial species… [Ref]

It has been a central thesis of my own work that entire ecosystems evolve together.  The present example is as strong a case as I can imagine.  Each bacterial species must be able to hold its own in the gut ecosystem, and the entire colony must serve the digestive needs of the individual human host, because if the host dies he takes the bacterial colony with him.

There is a great deal of selfishness and a great deal of cooperation involved in this dynamic, in a mixture that cannot easily be disentangled.  Suppose it is in the interest of the human community to eliminate its non-reproducing elders; if the H. pylori kill their host, however, they miss out on the opportunity to continue spreading from this one individual to others, perhaps younger family members who share his food.  I look for this issue to be discussed in the paper, and I don’t find it.

Indigenous microbial populations that contribute to the health not only of the individual but also of the host group will be most strongly selected…If indigenous organisms contribute to programmed host death in senescent individuals but not to the death of reproductively active individuals, there may be selection for their maintenance.

Yes, “there may be selection”…but at what level, and by what mechanism?  My guess is that individual selection for each bacterial strain and for the human host are all important, and that the communal function of the bacterial colony is also essential, as is the welfare of the human community, and all the combinations of interactions among these.

Leslie Matrices

This is simply a linear model for projecting the population age distribution from one year to the next.  If you have this many 1-year-olds this year, then next year, this many of them will die, and the rest will be 2-year-olds.  If you have this many 2-year-olds…, etc.  If you have this many women of child-bearing age in the population, then this is how many newborns you can expect in the coming year…

This is a standard model for calculating population dynamics (introduced by Patrick Leslie, 1945).  It is the model used by Glenn Webb in the current paper.  But missing from this model is the dynamic of population overshoot, followed by famine.  This is at once the gravest and most ordinary danger to any population, and the one that (in my view) aging was evolved to defend against.  So I worry that Blaser and Webb have left out something important.
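
For readers who have not met the tool, here is a minimal Leslie projection in Python; the three age classes and the fertility and survival numbers are invented for illustration, not taken from Blaser and Webb.

import numpy as np

# Toy Leslie matrix with three age classes: juvenile, adult, elder.
# First row: per-capita fertility; subdiagonal: survival to next class.
L = np.array([[0.0, 2.0, 0.0],    # only adults reproduce
              [0.6, 0.0, 0.0],    # juvenile -> adult survival
              [0.0, 0.5, 0.0]])   # adult -> elder survival

n = np.array([100.0, 50.0, 20.0])  # initial counts per age class
for year in range(5):
    n = L @ n
    print(year + 1, n.round(1))

# The long-run growth rate is the dominant eigenvalue of L.
print("asymptotic growth rate:", max(abs(np.linalg.eigvals(L))))

Nothing in the matrix responds to crowding: the projection grows or shrinks geometrically forever, which is precisely the overshoot-and-famine dynamic whose absence is noted above.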

 

Loss of Fertility and Loss of Life  
(or reproductive senescence and mortality acceleration)

In many animals (including us) there are two independent aspects of aging:

  • loss of fertility, leading to sterility
  • loss of strength, viability and robustness, leading to death

It is a prediction of classical evolutionary theory that these two should occur at the same time.  Why should an individual go on living after it is no longer able to reproduce?  It can be of no use either to itself or its community.  But in the biosphere, we find this prediction is routinely violated.

Of course, human females undergo menopause, and can live for decades thereafter. For theoretical reasons, post-reproductive life span has been thought to be unique to humans, or perhaps a few other social mammals that take care of their grandchildren.

But field studies show that this isn’t true.  In fact post-reproductive life is widespread in nature–perhaps it is the rule rather than the exception.  C. elegans worms don’t take care of their grandchildren, nor do quails or yeast cells; yet all of these have been observed to outlive their fertility.  Add whales, elephants, opossums, parakeets and guppies to the list as well.

How to understand post-reproductive life span is a topic for another week, but Charles Goodnight and I have written about the subject (in journalese) here.

 

Precedents missed

In addition to this, there is another paper that I suggest Blaser and Webb might have benefited from assimilating.

Once fertility has ended, there is natural selection to kill the non-reproducing individual, because it is consuming food and taking up space in the niche without contributing to sustaining the community.  This idea goes back to Weismann 120 years ago, but in modern times it has been modeled and explained most thoroughly by my colleague, Justin Travis.

Curiously, there is also a communal reason to keep the post-reproductive members alive for a while, in a weakened state, assuming the community has excess reproductive capacity.  The post-reproductive segment of the population serves as a buffer to prevent population overshoot and the whip-sawing that can lead to extinction.  They consume food when there is plenty, and thus they help keep the population from growing too fast.  Then, when food is scarce, they are the first to die because they are old and weak.  This is the idea that Goodnight and I modeled and published (2012).
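
Here is a deliberately crude sketch of that buffering bookkeeping in Python (my toy illustration; the dynamics and every number in it are invented, and it is not the published 2012 model): elders eat when food is plentiful, and they absorb the famine first when it is not.

# Toy seasonal bookkeeping for the "buffer" idea. A fixed food supply F
# feeds adults and elders; reproduction scales with how well-fed the
# adults are, and in famine the deficit falls on the elders first.
# All parameters are invented for illustration.

def season(adults, elders, F=300.0):
    need = adults + elders
    fed = min(1.0, F / need)            # fraction of needs met
    if need > F:                        # famine: elders starve first
        deficit = need - F
        elder_deaths = min(elders, deficit)
        elders -= elder_deaths
        adults -= deficit - elder_deaths
    births = 0.5 * adults * fed
    # 0.85 = adult survival net of ageing into the elder class (0.1).
    return adults * 0.85 + births, elders * 0.9 + adults * 0.1

adults, elders = 100.0, 40.0
for year in range(15):
    adults, elders = season(adults, elders)
    print(year, round(adults), round(elders))

In good years the elders’ consumption brings the famine threshold forward, damping growth; in bad years their deaths free food for the reproductive core. Whether this actually stabilizes the population depends on the parameters, which is what the published model explores.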

 

The bottom line

For me, the upshot is that I don’t understand enough of how their model works to judge whether they are on to something.  I’m tempted to add that computer modeling of aging is my specialty field, and if I can’t understand their model, for whom are they writing?

In controversial fields like this, where there has been so much confusion and misunderstanding, it behooves us all to be extra careful to follow Williams’s directives: think (and write) in terms of explicit mechanisms of heredity and selection.

Most readers of this work are already pre-disposed to pre-emptively dismiss the ideas of programmed aging and group selection.  Let’s not give them an excuse to do that.

###

This article originally appeared in Josh’s blog Aging Matters here.

The post Programmed Death, H. pylori, and the Legacy of George Williams appeared first on h+ Magazine.

]]>
http://hplusmagazine.com/2015/01/26/programmed-death-h-pylori-legacy-george-williams/feed/ 0
Just How Efficient Can a Jupiter Brain Be? http://hplusmagazine.com/2015/01/26/just-efficient-can-jupiter-brain/ http://hplusmagazine.com/2015/01/26/just-efficient-can-jupiter-brain/#comments Mon, 26 Jan 2015 15:38:34 +0000 http://hplusmagazine.com/?p=26279 Large information processing objects have some serious limitations due to signal delays and heat production.

The post Just How Efficient Can a Jupiter Brain Be? appeared first on h+ Magazine.

]]>

Large information processing objects have some serious limitations due to signal delays and heat production.

hypernode

Latency

XIX: The Dyson Sun

Consider a spherical “Jupiter-brain” of radius R. It will take maximally 2R/c seconds to signal across it, and the average time between two random points (selected uniformly) will be 36R/35c.
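
The 36R/35 factor is the classical mean distance between two uniformly distributed points in a ball; a quick Monte Carlo sketch in Python reproduces it.

import math, random

# Estimate the mean distance between two uniform random points in a
# ball of radius R; the exact value is 36R/35.

def point_in_ball(R, rng):
    while True:                      # rejection-sample the bounding cube
        p = [rng.uniform(-R, R) for _ in range(3)]
        if sum(x * x for x in p) <= R * R:
            return p

rng, R, N = random.Random(0), 1.0, 100_000
mean = sum(math.dist(point_in_ball(R, rng), point_in_ball(R, rng))
           for _ in range(N)) / N
print(mean, 36 * R / 35)   # ~1.028 vs 1.02857...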

Whether this is too much depends on the requirements of the system. Typically the relevant question is whether the transmission latency L is long compared to the processing time t of the local processing. In the case of the human brain, delays range from a few milliseconds up to 100 milliseconds, and neurons have typical frequencies of at most 100 Hz; the ratio L/t between transmission time and a “processing cycle” will hence be between 0.1 and 10, i.e. not far from unity. In a microprocessor the processing time is on the order of 10^{-9} s and delays across the chip (assuming signals at 10% of c) are \approx 3\cdot 10^{-10} s, giving L/t \approx 0.3.

If signals move at lightspeed and the system needs to maintain a ratio close to unity, then the maximal size will be R < tc/2 (or tc/4 if information must also be sent back after a request). For nanosecond cycles this is on the order of centimeters, for femtosecond cycles 0.1 microns; conversely, for a planet-sized system (R=6000 km) t=0.04 s, 25 Hz.
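
Plugging in numbers reproduces the figures above (a quick sketch in Python):

c = 3.0e8  # lightspeed, m/s

def max_radius(cycle_time, round_trip=False):
    # Largest sphere whose crossing latency matches one processing cycle.
    return cycle_time * c / (4 if round_trip else 2)

print(max_radius(1e-9))    # 0.15 m: centimetres for nanosecond cycles
print(max_radius(1e-15))   # 1.5e-7 m: ~0.1 micron for femtosecond cycles

R = 6.0e6                  # planet-sized system, metres
t = 2 * R / c
print(t, 1 / t)            # 0.04 s, i.e. 25 Hz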

latency

The cycle size is itself bounded by lightspeed: a computational element such as a transistor needs to have a radius smaller than the distance a signal can cross within one cycle, otherwise it would not function as a unitary element. Hence it must be of size r < ct or, conversely, the cycle time must be slower than r/c seconds. If a unit volume performs C computations per second close to this limit, C = (c/r)(1/r)^3, or C = c/r^4.
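
In code, with purely illustrative element sizes:

c = 3.0e8  # lightspeed, m/s

def comp_density(r):
    # C = c/r^4: computations per second and cubic metre when each
    # element of radius r runs at its lightspeed-limited rate c/r.
    return c / r ** 4

for r in (1e-6, 1e-9):
    print(r, comp_density(r))   # 3e32 and 3e44 ops/s/m^3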

(More elaborate analysis can deal with quantum limitations to processing, but this post will be classical.)

This does not mean larger systems are impossible, merely that the latency will be long compared to local processing (compare the Web). It is possible to split the larger system into a hierarchy of subsystems that are internally synchronized and communicate on slower timescales to form a unified larger system. It is sometimes claimed that very fast solid state civilizations will be uninterested in the outside world since it both moves immeasurably slowly and any interaction will take a long time as measured inside the fast civilization. However, such hierarchical arrangements may be both very large and arbitrarily slow: the civilization as a whole may find the universe moving at a convenient speed, despite individual members finding it frozen.

Waste heat dissipation

Information processing leads to waste heat production at some rate P Watts per cubic meter.

Passive cooling

If the system just cools by blackbody radiation, the maximal radius for a given maximal temperature T is

R = \frac{3 \sigma T^4}{P}

where \sigma \approx 5.670\cdot 10^{-8} W m^{-2} K^{-4} is the Stefan–Boltzmann constant. This assumes heat is efficiently distributed in the interior.

If it does C computations per volume per second, the total computations are 4 \pi R^3 C / 3 = 36 \pi \sigma^3 T^{12} C / P^3  – it really pays off being able to run it hot!

Still, molecular matter will melt above 3600 K, giving a max radius of around 29,000/P km. Current CPUs have power densities somewhat below 100 Watts per cm^2; if we assume 100 W per cubic centimetre, P=10^8 and R < 29 cm! If we assume a power dissipation similar to human brains, P=1.43\cdot 10^4, then the max size becomes 2 km. Clearly the average power density needs to be very low to motivate a large system.
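
The same arithmetic as a short sketch in Python, using the power densities quoted above:

sigma = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def max_radius(P, T=3600.0):
    # Passive blackbody cooling: R = 3*sigma*T^4/P, with P the waste
    # heat in W/m^3 and T the surface temperature in K.
    return 3 * sigma * T ** 4 / P

print(max_radius(1e8))      # ~0.29 m for CPU-like 100 W/cm^3
print(max_radius(1.43e4))   # ~2000 m for brain-like dissipation
print(max_radius(61787.0))  # ~460 m for the quantum-dot figure below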

size

Using quantum dot logic gives a power dissipation of 61787 W/m^3 (the figure used below) and a radius of 470 meters. However, by slowing down operations by a factor \sqrt{f} the energy needs decrease by the factor f. A reduction of speed to 3% gives a reduction of dissipation by a factor 10^{-3}, enabling a 470 kilometre system. Since R scales as 1/P and P scales as f, the total computations per second for the whole system scale with size as R^3 \sqrt{f} \propto \sqrt{f}/f^3 = f^{-2.5}: slow reversible computing produces more computations per second in total than hotter computing. The slower clockspeed also makes it easier to maintain unitary subsystems. The maximal size of each such system scales as r = 1/\sqrt{f}, and the total amount of computation inside them scales as r^3 = f^{-1.5}. In the total system the number of subsystems changes as (R/r)^3 = f^{-3/2}: although they get larger, the whole system grows even faster and becomes less unified.

The limit of heat emissions is set by the Landauer principle: we need to pay at least k_B T \ln(2) Joules for each erased bit. So the number of bit erasures per second and cubic meter, I, will be less than P/(k_B T \ln 2). To get a planet-sized system P will be around 1-10 W/m^3, implying I < 2.9\cdot 10^{19-20} for a hot 3600 K system, and I < 3.5\cdot 10^{22-23} for a cold 3 K system.
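
Checking the bound numerically (a sketch in Python):

import math

k_B = 1.380649e-23   # Boltzmann constant, J/K

def landauer_limit(P, T):
    # Maximum bit erasures per second and cubic metre for a waste-heat
    # budget P (W/m^3) at temperature T (K): I < P/(k_B T ln 2).
    return P / (k_B * T * math.log(2))

for T in (3600.0, 3.0):
    print(T, landauer_limit(1.0, T), landauer_limit(10.0, T))
# ~2.9e19-2.9e20 erasures/s/m^3 when hot, ~3.5e22-3.5e23 when cold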

Active cooling

Passive cooling just uses the surface area of the system to radiate away heat to space. But we can pump coolants from the interior to the surface, and we can use heat radiators much larger than the surface area. This is especially effective at low temperatures, where radiative cooling is very weak and heat flows are normally gentle (remember, they are driven by temperature differences: there is not much room for big differences when everything is close to 0 K).

If we have a sphere with radius R with internal volume V(R) of heat-emitting computronium, the surface must have PV(R)/X area devoted to cooling pipes to get rid of the heat, where X is the amount of Watts of heat that can be carried away by a square meter of piping. This can be formulated as the differential equation:

V'(R) = 4\pi R^2 - (P/X)V(R)

The solution (with V(0) = 0) is

V(R) = \frac{4\pi X^3}{P^3}\left((P/X)^2 R^2 - 2(P/X)R - 2e^{-(P/X)R} + 2\right)

This grows as R^2 for larger R. The average computronium density across the system falls as 1/R as the system becomes larger.
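
A short numerical sketch in Python (with an arbitrary illustrative value of P/X) confirms both claims: V/R^2 approaches the constant 4\pi X/P, and the average density falls off as 1/R.

import math

def V(R, a):
    # Closed-form computronium volume from above, with a = P/X in 1/m.
    return 4 * math.pi * (a*a*R*R - 2*a*R - 2*math.exp(-a*R) + 2) / a**3

a = 0.5   # illustrative P/X value
for R in (1e2, 1e3, 1e4):
    density = V(R, a) / ((4.0 / 3.0) * math.pi * R ** 3)
    print(R, V(R, a), V(R, a) / R ** 2, density)
# V/R^2 tends to 4*pi/a (~25.13); density scales roughly as 3/(a*R).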

If we go for a cooling substance with great heat capacity per unit mass at 25 degrees C, hydrogen is the choice at 14.30 J/g/K; but in terms of volume, water is better at 4.2 J/cm^3/K. However, near absolute zero heat capacities drop towards zero and there are few choices of fluid.

One neat possibility is superfluid cooling. A superfluid carries no thermal energy; it can however transport heat by being converted into normal fluid at the hot end, with a frictionless countercurrent bringing back superfluid from the cold end. The rate is limited by the viscosity of the normal fluid, and apparently there are critical velocities of the order of mm/s. A CERN paper gives the heat transport rate per square meter of pipe as

Q = \left[\frac{\rho_s^3 S^4 T^3}{A \rho_n}\cdot\frac{\Delta T}{L}\right]^{1/3}

where A is about 800 m·s/kg at 1.8 K, \rho_n is the density of normal fluid, \rho_s that of the superfluid, S is the entropy per unit mass, and L the pipe length. Looking at it as a technical coolant gives a steady state heat flux along a pipe of around 1.2 W/cm^2 in a 1 meter pipe for a 1.9-1.8 K difference in temperature. There are various nonlinearities and limitations due to the need to keep things below the lambda point. Overall, this produces a heat transfer coefficient of about 1.2\cdot 10^{4} W/m^2/K, in line with the range 10,000-100,000 W/m^2/K found in forced convection (liquid metals have maximal transfer ability).

So if we assume about 1 K temperature difference, then for quantum dots at full speed, P/X = 61787/10^5 = 0.61787, and a one km system has a computational volume of 7.7 million cubic meters of computronium, or about 0.001 of the total volume.

Slowing it down to 3% (reducing emissions by a factor of 1000) boosts the density to 86%. At this intensity a 1000 km system would look the same as the previous low-density one.

Conclusion

If the figure of merit is just computational capacity, then obviously a larger computer is always better.

But if it matters that parts stay synchronized, then there is a size limit set by lightspeed. Smaller components are better in this analysis, which leaves out issues of error correction – below a certain size level thermal noise, quantum tunneling and cosmic rays will start to induce errors. Handling high temperatures well pays off enormously for a computer not limited by synchronization or latency in terms of computational power; after that, reducing volume heat production has a higher influence on total computation than actual computation density.

Active cooling is better than passive cooling, but the cost is wasted volume, which means longer signal delays. In the above model there is more computronium at the centre than at the periphery, somewhat ameliorating the effect (the mean distance is just 0.03R). However, this ignores the key issue of wiring, which is likely to be significant if everything needs to be connected to everything else.

In short, building a Jupiter-sized computer is tough.

Asteroid-sized ones are far easier. If we ever find or build planet-sized systems, they will either be doing reversible computing or be mostly passive storage rather than processing. Processors by their nature tend to be hot and small.

###

This article originally appeared here. Graphs by Peter Rothman. Jupiter brain image from Orion’s Arm.

The post Just How Efficient Can a Jupiter Brain Be? appeared first on h+ Magazine.

]]>
http://hplusmagazine.com/2015/01/26/just-efficient-can-jupiter-brain/feed/ 0
Video Friday: The Structure of the Neocortex — Dr. Clay Reid at the Allen Institute for Brain Science http://hplusmagazine.com/2015/01/23/video-friday-structure-neocortex-dr-clay-reid-allen-institute-brain-science/ http://hplusmagazine.com/2015/01/23/video-friday-structure-neocortex-dr-clay-reid-allen-institute-brain-science/#comments Fri, 23 Jan 2015 20:55:20 +0000 http://hplusmagazine.com/?p=26287 The neocortex is the part of the brain we use for thinking. Just how does it work?

The post Video Friday: The Structure of the Neocortex — Dr. Clay Reid at the Allen Institute for Brain Science appeared first on h+ Magazine.

]]>


In an overview of the structure of the mammalian neocortex, Dr. Clay Reid explains how the mammalian cortex is organized in a hierarchy, describing the columnar principle and canonical microcircuits. This full-length, undergraduate-level lecture is the third of a 12-part series entitled Coding & Vision 101, produced by the Allen Institute for Brain Science as an educational resource for the community.

The post Video Friday: The Structure of the Neocortex — Dr. Clay Reid at the Allen Institute for Brain Science appeared first on h+ Magazine.

]]>
http://hplusmagazine.com/2015/01/23/video-friday-structure-neocortex-dr-clay-reid-allen-institute-brain-science/feed/ 0