
Graphene is Next

Image Courtesy of: condmat.physics.manchester.ac.uk/pictures/

Graphene. If you’ve never heard of it, don’t worry; a lot of people haven’t, because it has only been “discovered” relatively recently, and most of the truly interesting news about it has come in the last year. The amazing thing is that we’ve actually been using it for centuries, in the form of the common pencil. Graphene is a form of carbon, much like carbon nanotubes and other fullerenes, with one major difference: while fullerenes are 3D structures of carbon atoms, graphene is a flat sheet. It’s a 2D lattice of carbon with bonds as strong as diamond. It’s this sheet-like nature that makes it so useful in a pencil. As you write, individual planes of graphite are sheared off the end and deposited on the paper. Those individual planes are pure graphene.

By now, most of you are familiar with carbon nanotubes, a.k.a. CNTs, and their potential for computers. Graphene has equally amazing properties, including some that might make it far more readily usable than CNTs. First, like CNTs, graphene is capable of conducting electricity with much less resistance than copper. That alone makes it useful, but graphene has even more interesting properties. As New Scientist reports, bending graphene creates strains between atoms that can create isolated pathways which then act as nanoribbons (in effect, wires) within the still-connected sheet. In other words, the morphology of graphene affects its electrical properties: change the flat sheet by bending parts of it, and you change how electricity flows through it.

But that isn’t all. The pattern of carbon bonds has effects as well. Graphene is a hexagonal grid of carbon, much like a roll of chicken wire. Remove one random atom from the pattern every so often, and graphene can exhibit magnetic behavior without needing the presence of magnetic metals. Adding hydrogen into the mix creates graphene’s non-conductive cousin, graphane. Taking precisely defined patterns of atoms out of the sheet can carve out well-defined circuits: wires that are almost superconducting.

Image Courtesy of: en.wikipedia.org/wiki/Graphene

All of these properties make graphene a very important material for the future of electronics. It has already been used to create field-effect transistors, the primary component of a computer processor. When you combine this with the other features above, you have a single material that could be used for the majority of the components in every electronic device we currently have… with one major difference: speed. Current silicon-based chips have a limited speed at which they can run at room temperature without overheating and malfunctioning. Go much over 3 GHz without some major cooling and chips melt down. But replace those chips with graphene equivalents, without having made any other changes to the circuits, and you can raise that limit much higher. Potentially 100 to 1000 times higher.

Let’s think about that for a moment. That’s 300 GHz to 3,000 GHz, or 3 terahertz (THz).
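
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The 3 GHz baseline and the 100x to 1000x multipliers are simply the figures quoted above, not measurements:

    # Back-of-the-envelope clock-speed scaling, using the figures quoted in the article
    baseline_ghz = 3.0          # rough ceiling for an air-cooled silicon chip today
    for multiplier in (100, 1000):
        speed_ghz = baseline_ghz * multiplier
        print(f"{multiplier}x  ->  {speed_ghz:,.0f} GHz  ({speed_ghz / 1000:g} THz)")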

That’s a jump of two or three orders of magnitude up the exponential curve, my friends, especially when you combine it with the advances in multi-core technology and parallel computing. We’re talking about the smartphone in your pocket having a thousand times the computing power of your desktop PC, while using no more power than it does right now. The resistance of graphene at room temperature is so much lower than that of copper or silicon that even though it’s running at 1000 times the speed, it isn’t drawing any more current or wasting any more energy as heat than an identical silicon device, and that’s without considering any other possible advances in the field of electronics design.

We’re talking about that smartphone in your pocket having a thousand times the computing power of your desktop PC, but using no more power.

That big a leap in processing speed will simplify a lot of extremely complex tasks that require extensive amounts of data. From SETI’s search for extraterrestrial intelligence to simulations of all the ways a protein can fold, scientists already use millions of processors in parallel to speed up research. A thousand-fold increase in computer speed could cut months to years off the time needed for their projects. The same goes for DNA sequencing, data mining, and a host of other areas.
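
Purely as an illustration of the scale involved (the 18-month runtime below is a hypothetical figure, not taken from any real project):

    # Hypothetical example: runtime of a long compute-bound job after a 1000x speed-up
    months_today = 18                            # assumed runtime on current hardware
    hours_after = months_today * 30 * 24 / 1000  # ~30 days/month, 24 h/day, 1000x faster
    print(f"{months_today} months of computation -> roughly {hours_after:.0f} hours")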

And science will not be the sole beneficiary. Most smartphones these days can use their cameras to create virtual overlays on the images they see, a technique called Augmented Reality. AR has advanced to the point that it’s possible to create virtual characters in photos on your phone using nothing more than a 2D patterned target on the ground, or to create interactive “virtual assistants” in projected video that are capable of interacting with real-world objects. Ultrafast computers will be essential for ushering in the age of Virtual Reality.

Image Courtesy of: en.wikipedia.org/wiki/Graphene

A massive increase in computer speeds is likely to benefit other complex computing tasks as well, such as real-time speech translation. Right now, it is difficult to make these programs run quickly enough to be useful. A thousand-fold increase in computer speed could make brute-force approaches a practical solution, enabling computers to crunch through entire dictionaries in milliseconds. It could make possible the elusive conversational interface that so many people believe will be the next step in operating systems. That speed will also be useful in the next generation of robotics, quite possibly bringing us a step closer to the kind of robots seen in movies like I, Robot or Star Wars. Ultrafast computers would enable a major reduction in the size of the computers needed to run some of the most complex robots we currently have, bringing the day of Rosie the Robot maid that much closer.
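
For a rough sanity check on that “entire dictionaries in milliseconds” claim, here is a sketch with made-up but plausible numbers for dictionary size and per-entry work; only the 3 THz clock figure comes from the article:

    # Rough estimate of brute-forcing an entire dictionary per utterance
    entries = 500_000          # assumed dictionary size (illustrative)
    ops_per_entry = 10_000     # assumed instructions to score one candidate (illustrative)
    clock_hz = 3e12            # 3 THz, assuming roughly one instruction per cycle
    seconds = entries * ops_per_entry / clock_hz
    print(f"Sweeping the whole dictionary takes about {seconds * 1000:.1f} ms")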

Obviously, ultrafast computers are going to have a very far-reaching effect on the way we do things, as well as how we interact with each other and our world, so the real questions are how practical it is to make graphene chips, and how soon they can be made. The answer will probably surprise you. Graphene has already been shown to work in current chip-manufacturing processes with only minimal retooling needed. In fact, IBM has already created working 30 GHz test devices using graphene transistors. In other words, graphene could begin making its way into computers as early as 2012 to 2015, and almost certainly by 2020.

Graphene, that same single-atom-thick layer of carbon that is a part of every pencil mark, is going to make all of this possible. Not bad for the humble Number 2, huh?


45 Responses

  1. Aedan says:

    Are there any public shareholding companies that are presently researching/beginning application of Graphene?

  2. Valkyrie Ice says:

    And yet more Graphene news:

    http://nextbigfuture.com/2010/05/graphene-can-have-quantum-dots.html

    [quote] Quantum dots are crystalline molecules from a few to many atoms in size that interact with light and magnetic fields in unique ways. The size of a dot determines its band gap – the amount of energy needed to close the circuit – and makes it tunable to a precise degree. The frequencies of light and energy released by activated dots make them particularly useful for chemical sensors, solar cells, medical imaging and nanoscale circuitry.

    Singh and Penev calculated that removing islands of hydrogen from both sides of a graphane matrix leaves a well with all the properties of quantum dots, which may also be useful in creating arrays of dots for many applications.

    Their work revealed several interesting characteristics. They found that when chunks of the hydrogen sublattice are removed, the area left behind is always hexagonal, with a sharp interface between the graphene and graphane. This is important, they said, because it means each dot is highly contained; calculations show very little leakage of charge into the graphane host material[/quote]

    Now, if you are a reader of science fiction, in particular that of Wil McCarthy, you will have read about a substance called WELLSTONE, also referred to as claytronics or programmable matter.

    Need I go on?

    Now, there are far more applications that will be exploited much sooner than the possibilities of wellstone, such as quantum dot transistors, LEDs, etc. So imagine a carbon display, with pixels smaller than the rods and cones in your eyes, built into a contact lens.

    Such an amazingly useful material carbon is. Graphane is graphene with a layer of hydrogen bonded to each side. If you look closely at the top picture in the article, I do believe it actually shows GRAPHANE (i.e., the little blue balls are hydrogen atoms). This process creates hexagonal “wells” of conductive graphene (C, no H) isolated from other wells by non-conductive graphane (C with H). Combine this with graphene’s other properties, and you can see where it could enable some amazing possibilities.

  3. Valkyrie Ice says:

    And for an update:

    http://nextbigfuture.com/2010/05/first-observation-of-plasmarons-in.html

    *****************************************

    At Berkeley Lab’s Advanced Light Source, scientists working with graphene have made the first observation of the energy bands of complex particles known as plasmarons. Their discovery may hasten the day when graphene can be used to build ultrafast computers and other electronic, photonic, and plasmonic devices on the nanoscale. Understanding the relationships among these three kinds of particles—charge carriers, plasmons, and plasmarons—may hasten the day when graphene can be used for “plasmonics” to build ultrafast computers—perhaps even room-temperature quantum computers—plus a wide range of other tools and applications.

    “The interesting properties of graphene are all collective phenomena,” says Rotenberg, an ALS senior staff scientist responsible for the scientific program at ALS beamline 7, where the work was performed. “Graphene’s true electronic structure can’t be understood without understanding the many complex interactions of electrons with other particles.”

    The electric charge carriers in graphene are negative electrons and positive holes, which in turn are affected by plasmons—density oscillations that move like sound waves through the “liquid” of all the electrons in the material. A plasmaron is a composite particle, a charge carrier coupled with a plasmon.

    ******************************************

    Très interesting, no?

    • Valkyrie Ice says:

      Did further research into Plasmons. This is from Wikipedia:

      Plasmons have been considered as a means of transmitting information on computer chips, since plasmons can support much higher frequencies (into the 100 THz range, while conventional wires become very lossy in the tens of GHz). For plasmon-based electronics to be useful, an analog to the transistor, called a plasmonster, must be invented.

      Graphene used conventionally could be 3 to 10 THz. Graphene plasmonic computers could be 300 to 1,000 THz. Graphene plasmonic “wires” could carry data at the same speeds across continents.

  4. Anonymous says:

    But what’s the difference between graphite, graphate and graphene?

    • Valkyrie Ice says:

      Graphene is a single-atom-thick sheet of carbon. When these sheets stack into a crystal, it’s graphite.

      Graphene = 1 sheet

      Graphite = many sheets stacked

      “Graphate” only turns up in a Google search as a software product.

  5. Anonymous says:

    I’m glad I’m just an ignorant pleb.
    Let the computerists squabble, and the rest of us can just reap the benefits when they come.

  6. Concerned Bystander says:

    But wouldn’t the wood in the pencils get burned at the high temperature computers run at?

  7. Anony-mouse says:

    I doubt this will make the amount of difference everyone is hoping for. The main bottleneck for computing is the transfer of information from a storage device to the processor. If you wonder why your computer is so slow to start up, it’s because your hard drive is struggling to keep up while your OS dumps a bunch of files into temporary memory.

    • Valkyrie Ice says:

      http://nextbigfuture.com/2010/04/hp-promises-20-gigabyte-memristor.html

      http://nextbigfuture.com/2010/02/one-terabyte-flash-drive-chip-coming-in.html

      http://nextbigfuture.com/2010/04/nanodots-breakthrough-may-lead-to.html

      http://nextbigfuture.com/2010/01/nanostructures-could-make-rram-memory.html

      http://nextbigfuture.com/2009/12/stackable-nanoionic-memory-return-of.html

      http://nextbigfuture.com/2009/10/nanostructured-nickel-magnesium-oxide.html

      http://nextbigfuture.com/2009/09/tech-roundup-super-high-density-ibm.html

      http://nextbigfuture.com/2009/09/koreans-show-feasibility-of-room.html

      I can keep going; that’s just from the last 9 months or so. But it should be pretty obvious that the future of spinning media is extremely limited. Solid-state memory devices will replace them within a few years. Memristors will eliminate the need for devices that boot, as even a loss of power won’t erase the data actively being processed; the machine will resume precisely where it left off when power returns.

      Additionally, the creation of an internet capable of transmitting data at comparable speeds is also well underway.

      • Thank you for the extensive list of supporting articles about concurrent developments in technology!

        Solid-state memory, graphene-based systems, and many other new uses of old materials will fundamentally change the way electronics operate, and even how we conceive of the relationship between electrical power and our ability to run data-intensive processes. Beyond possible AR applications, it will be amazing to see supercomputers that require very little power and work at speeds orders of magnitude beyond where we are now.

        A bold prediction I make, based on work by Stephen Wolfram
        (see: http://www.ted.com/talks/lang/eng/stephen_wolfram_computing_a_theory_of_everything.html),

        is that we will soon be able to collect data so ubiquitously that, for many applications, we will work only at the concept level rather than having much, if any, involvement with the raw data. By that I mean we could describe a program in plain English, using abstract terms and a rough selection of the sort of result we think we need, and the computer would almost instantly run through thousands of formulae and models to select the code or math objects that will produce that result. That may or may not make any sense, but as a psychology doctoral student my jargon isn’t so good for some of the technical concepts in my head.

        • Valkyrie Ice says:

          You are basically describing a Star Trek computer interface. You lay out the abstract, high-level order (“Computer, I want a program that does A, B, and C”), and the computer creates a program automatically to fulfill your request.

          Such high-level design software is probably still several years away, but yes, ultrafast computers can certainly make such an interface somewhat easier to build.

      • Anonymous says:

        No matter how much faster the media gets, no matter how much faster the CPU gets, there is still a limit. The media can improve a decent bit. The CPU… not so much. A 1 GHz CPU will pretty much run about 10x faster than a 100 MHz CPU. But a 30 GHz CPU is _not_ going to run 10x faster than a 3 GHz CPU. And a 300 GHz CPU will probably not show any noticeable difference from a 30 GHz one. At least not without a fundamental shift in how we build PCs.

        You can’t just crank up the clock speed forever and expect that everything will still work. 3 GHz is about the limit for that. After that, the speed of light (or more realistically, the speed of electricity) starts to play a role. Past 3 GHz, you probably won’t be able to fetch data from RAM in one instruction. So you’ll spend some time sitting there waiting for data. As you increase the speed, you spend even more time waiting. And there’s a second barrier, probably around 30 GHz, where you won’t be able to move data from one part of the CPU to another within one clock cycle. The future is not higher clock rates. Higher clock rates are already nearly useless. The future is massively parallel computing and quantum computing. And memory, even hard drives, will go the same way. No matter how fast your hard drive is, if it’s connected to your CPU by 20 cm of cable, it’s going to take around a nanosecond for that data to get there. Which sounds fast… until you realize that’s about 1 GHz. Or 0.5 GHz really, since the data request has to travel the same distance.
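
        A quick back-of-the-envelope check of that latency argument, in Python; the speed of light is used as an optimistic upper bound, since signals in real copper traces travel more slowly:

            # Round-trip signal delay over a 20 cm CPU-to-storage path, at best-case speed
            c = 3e8              # m/s, speed of light in vacuum (copper traces are slower)
            distance_m = 0.20    # the 20 cm cable length from the comment above
            round_trip_s = 2 * distance_m / c
            print(f"round trip: {round_trip_s * 1e9:.2f} ns -> at most {1 / round_trip_s / 1e9:.2f} GHz of fetch cycles")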

        • You’re absolutely right… if we were talking about silicon. Or even if we were talking about computers pushing electrons around in a circuit that allows electrons to move in three dimensions.

          But… we’re not.

          We’re dealing with electrons constrained to what is either a two-dimensional “ribbon” or a one-dimensional “wire.” In both cases, the electrons travel at far higher speeds than they can in a three-dimensional conductor, much closer to lightspeed than in current designs. They are also extremely limited in the amount of “crosstalk” that can occur between circuits because of the constraints on the electrons (i.e., much less quantum tunneling). Rather than dealing with electrons as a “river” flowing through “pipes,” this is much closer to dealing with electrons as discrete particles. And that’s just for a start. There is also research into using graphene in “plasmonic” circuits, which ignore particles altogether and deal with electrons as wave functions, which could approach lightspeed.

          This is a crucial difference, and it makes for a very LARGE difference in how compact a circuit can be.

          Additionally, there are MAJOR changes coming in how computers will be constructed in the near future: not only can graphene be used to make processors, it has been demonstrated to be able to form nearly every component used in electronics, from triodes, diodes, capacitors, and coils to magnetic devices. The “motherboard,” as well as the power supply, the HD, and even memory, could all be packed onto a single chip, eliminating all that “cable.”

          I quite agree that parallel processors will play a massive role; I’m less sure about quantum computers. They are currently at the stage of the old ENIAC and need a lot more development prior to widespread use, which makes them a somewhat further-down-the-road development than THz computers. I also think that memristors will play a very large role in the near future by eliminating the need for separate “memory devices.” Who will need RAM or an HD when the processor IS both at the same time?

          This article was written to show the LOWEST level of possibility, in an effort to explain the potential to readers who aren’t computer technicians or experts, and as such I chose to limit it solely to graphene’s potential in electronics, and not delve into alternate computer architectures, manufacturing techniques, or changes in our concepts of “processor,” “memory,” and “storage.”

          Are there limits? Of course, but those limits are not the ones you listed, which hold true only for current silicon-based architectures. TBH, I think programming will be a bigger limit than the hardware itself.

        • fixitman says:

          See comment below. His point was that current silicon technologies are limited, yes, but we’ll be moving away from them in the near future. One of the most limiting factors in current technology is the use of digital computing (strings of 1s and 0s) rather than the possible ANALOG computing which will be necessary to utilize faster technology. Storing only 1s and 0s is very inefficient. A single dot, in digital terms, is either, say, black or white. A single dot, in analog terms, can hold a plethora of data (analogous to a full spectrum of colors). This alone could let us increase computing speed MILLIONS of times, but it would require programming equivalent to that found in the human brain, rather than our primitive digital programming. This is similar to the way a digital signal only approximates an analog one, and drops out entirely where an analog signal still gets through. (Blatant plug: down with digital television!)
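
          For what it’s worth, here is a tiny sketch of how much information one “dot” could hold, assuming perfectly distinguishable analog levels (the level counts are arbitrary examples):

              import math

              # Bits carried by one storage "dot" that can hold one of N distinguishable levels
              for levels in (2, 16, 256, 65_536):
                  print(f"{levels:>6} levels -> {math.log2(levels):.0f} bits per dot")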

  8. Valkyrie Ice says:

    And further updates:

    http://apl.aip.org/applab/v96/i17/p173104_s1?isAuthorized=no

    A high-performance top-gate graphene field-effect transistor (G-FET) is fabricated, and used for constructing a high efficient frequency doubler. Taking the advantages of the high gate efficiency and low parasitic capacitance of the top-gate device geometry, the gain of the graphene frequency doubler is increased about ten times compared to that of the back-gate G-FET based device. The frequency response of the frequency doubler is also pushed from 10 kHz for a back-gate device to 200 kHz, at which most of the output power is concentrated at the doubled fundamental frequency of 400 kHz.

    IBM recently showed that graphene transistors can operate at up to 100 GHz, and the group at Peking University believes that the material may even operate well in the THz regime.

    * * * * * * * * * * * * * * * * * *

    “Low parasitic capacitance” should answer the question posed about it earlier, but there’s another detail in this paper that is very important.

    “a graphene based frequency doubler can provide more than 90% converting efficiency, while the corresponding value is not larger than 30% for conventional frequency doubler”

    A typical frequency doubler loses 70% of the input signal’s power to heat. Graphene only loses 10%.
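
    In concrete terms (the 1 W input is just a convenient example figure):

        # Power lost as heat by a frequency doubler at the two quoted efficiencies
        input_w = 1.0
        for name, efficiency in (("conventional", 0.30), ("graphene", 0.90)):
            print(f"{name:>12}: {input_w * (1 - efficiency):.1f} W lost as heat per {input_w:.0f} W in")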

    Frequency doublers are extremely useful in both lasers and radio applications.

    http://en.wikipedia.org/wiki/Second-harmonic_generation

  9. M. Report says:

    Now I know I am growing old; I did not think moly-circs were possible outside of Weber’s SF stories. 🙂

    More generally, anyone who tries to slow, let alone prevent, the development of new, wealth-creating technology is cutting their own throat behind their back; getting really rich really fast is the only hope for the global economy.

    • fixitman says:

      “Getting rich really fast?” Get real. The only wealth is in having food to eat, a roof over your head to protect you from the elements, something to wear so that your skin is protected, and, most importantly, your health. Many agrarian societies have been MUCH “wealthier” than technological societies simply because there was no debt, everyone had enough of everything they NEEDED, and there was no overcrowding, because you can’t grow FOOD where PEOPLE are over-concentrated.
      Another point: give everyone in the world a million dollars, and I guarantee you the cost of everything will jump by several thousand times. A dollar loaf of bread will cost a thousand dollars.
      Your life savings will buy you NOTHING.
      That’s not wealth. That’s stupidity.
      We are poorer now, in our overabundant society, than we were in the society of scarcity during the Depression.

  10. subrot0 says:

    Just a scant two hundred or so years ago, somebody from the patent office worried that everything that could be invented or discovered already had been, and that the patent office would have to be shut down.

    This discovery and things coming out of this will undoubtedly shake the world yet again. This is a good thing. More and better supercomputers to find genes to kill cancer, kill bacteria and perhaps ease the aches and pains of old people.

    Of course, more and better supercomputers also means we can get to porn and poker sites faster also.

  11. bja says:

    Great, and I just upgraded my desktop. When does it end?

  12. Anonymous says:

    I look forward to a time when someone can make some supercomputers using graphene chips, hook a million or so of them together, then turn Phil Jones and Michael Mann loose and watch how quickly they can produce fraudulent AGW results. The future is going to be very exciting. They will create the hockey stick from h*ll.

