The major disruptive force of new and refined material, gene, bio, renewable-energy and nano technologies.
Many new and emerging disruptive technologies and developments are very welcome; a few are questionable, but one type in particular may cause serious problems unless it is dealt with strictly.
According to the World Economic Forum (WEF), we are in a new season in every industry, in every country in the world. And of course – as mentioned in the previous parts – most of it is based on technology which is directly or indirectly linked to Moore’s exponential law. It has altered our vision of what is possible and set in motion a process that is both gratifying and terrifying: as the period between historic breakthroughs shrinks, the list of potential ‘upcoming big things’ grows longer every day.
It is anticipated that these technology categories will be spurring a fast-growing, multi-trillion-euro annual business within a decade.
While these technologies affect so many products and services comprehensively, I would like to focus on three by way of example: clean energy/solar power, next-generation genomics and advanced materials science.
I’ll start with an overview of the energy sector in general and then focus on solar power in particular. According to BP’s Energy Outlook 2016, as the world economy expands, more energy will be needed to fuel the higher levels of activity and living standards. Population and income are the key drivers behind the growing demand for energy – the world’s population is projected to increase by around 1.5 billion to reach nearly 8.8 billion people by 2035, while rapid improvements in energy intensity (i.e. the amount of energy used per unit of GDP) mean that energy demand grows far less quickly than global GDP: 34 per cent versus 107 per cent.
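The energy-intensity claim above can be made concrete with a quick back-of-the-envelope check – my own illustration, using only the two growth figures quoted from BP’s outlook:

```python
# Energy intensity = energy used per unit of GDP (as defined above).
# BP's 2035 projections: energy demand grows 34%, global GDP grows 107%.
# The implied fall in energy intensity over the period:

energy_growth = 0.34   # projected growth in energy demand to 2035
gdp_growth = 1.07      # projected growth in global GDP to 2035

intensity_change = (1 + energy_growth) / (1 + gdp_growth) - 1
print(f"energy intensity change: {intensity_change:.0%}")  # about -35%
```

In other words, each unit of GDP in 2035 is projected to require roughly a third less energy than today, which is why demand grows so much more slowly than output.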
It is interesting to note that growth in the global consumption of liquid fuels is driven by transport and industry, with transport accounting for almost two-thirds of the increase. The global vehicle fleet (commercial vehicles and passenger cars) will more than double by 2035, from around 1.2 billion today to 2.4 billion. But while the fuel mix continues to shift, fossil fuels are, unfortunately, likely to remain the dominant source of energy powering the world economy in 2035.
Renewables are set to grow rapidly, as their costs continue to fall and the pledges made in Paris regarding climate change support their widespread adoption. The EU continues to lead the way in the use of renewable power; however, in terms of volume growth up to 2035 the EU is surpassed by the US, and China adds more than the EU and US combined.
Today, the global oil and natural gas industry is big money, about a $4 trillion business. This is about to change. The sun delivers around 7,000 times more energy to the earth than we consume today, and the cost of harnessing solar energy is on a Moore’s Law curve, halving every 16-18 months. With the current technology, it would take less than half a percent of the Earth’s land area to meet all energy needs.
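To make that ‘Moore’s law curve’ claim concrete, here is a minimal sketch of exponential cost decline – my own illustration, taking a 17-month halving period simply as the midpoint of the 16-18 months quoted above:

```python
# Exponential cost decline: halving every ~17 months means
# cost(t) = cost_0 * 0.5 ** (t / halving_period).

def solar_cost(cost_0, months, halving_months=17):
    """Indexed cost of solar energy after `months` of steady halving."""
    return cost_0 * 0.5 ** (months / halving_months)

# Starting from an index value of 100, one decade (120 months) later:
print(round(solar_cost(100, 120), 2))  # roughly 0.75, i.e. a ~130-fold drop
```

A steady halving period compounds quietly but relentlessly: a decade contains about seven halvings, which is enough to cut costs by two orders of magnitude.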
Imagine the impact on this world when we have energy which is clean and practically free everywhere – it changes all industries.
According to the latest REN21 global status report, 2015 saw a record worldwide investment and implementation of clean energy such as wind, solar and hydropower ($286bn, approx. 150 Gigawatts, with solar energy accounting for 56 per cent of the total and wind power for 38 per cent) – that’s the largest annual increase ever, and equivalent to Africa’s entire power generating capacity.
For the first time, emerging economies outspent richer nations in the green energy race, with China accounting for a third of the global total. Jamaica, Honduras, Uruguay and Mauritania were among the highest investors relative to their GDP. In the next 20 years, over 50 per cent – theoretically up to 100 per cent – of the world’s energy production could be solar. We are approaching the peak of the solar revolution, where the cost of solar cells will plummet, efficiency will rise dramatically, and the incentives for widespread adoption will become compelling. There may also be exciting new alternatives to solar cells made of silicon (for example perovskite – a light-sensitive crystal that has the potential to be more efficient, inexpensive and versatile than all other existing solar solutions to date).
Although solar energy currently accounts for only approximately 0.5 per cent of electricity generated, solar energy production is projected to grow globally at 30 per cent per annum. For those who want to do the simple calculation: at a 30 per cent annual growth rate it looks, theoretically, like this: in five years, we go from 0.5 per cent to 1.9 per cent; in 10 years, we are at 6.9 per cent; in 15 years, at 25.6 per cent; and in 20 years, at 95 per cent.
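That compound-growth arithmetic can be sketched in a few lines of Python – my own illustration of pure compounding from a 0.5 per cent base; it ignores growth in total electricity demand, and of course a real share cannot exceed 100 per cent:

```python
# Pure compound growth of solar's share of electricity generation:
# 0.5% today, growing at 30% per year (a deliberate simplification).

def solar_share(years, start=0.5, growth=0.30):
    """Solar's theoretical share of generation (in per cent) after `years`."""
    return start * (1 + growth) ** years

for n in (5, 10, 15, 20):
    print(n, "years:", round(solar_share(n), 1), "per cent")
```

This is the familiar lesson of exponential growth: the curve looks flat for a decade and then seems to explode, even though the growth rate never changed.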
But we must bear in mind that solar power is intermittent – the sun does not shine 24 hours a day – and that some areas will produce an overabundance of it, which makes it technically and commercially challenging to handle. Without cost-effective ways to store such renewable power, and without new infrastructure to balance supply and demand across the grid, little will change.
Ramez Naam, energy analyst and science fiction author, says, “we are now hitting a crossover point where solar, without subsidies, is starting to beat out all other sources of energy.” According to Naam, progress in technology has caused solar prices to drop two hundred times since the 1970s and five times in the last five years alone.
Outside of the city of Los Angeles, a new solar plant will be built at 3.6 cents per kilowatt-hour, and in Dubai, the lowest bid for a new, unsubsidized solar plant came in at less than 3 cents per kilowatt-hour. “That is a price that five years ago people would have told you is simply impossible to reach. Think about the cost of energy — it fluctuates. But the cost of technology, like the cellphone in your pocket? Those costs only go down. So now we have a technology that produces energy. It just gets cheaper and cheaper and will disrupt everything in its path,” he says.
My next example is next-generation genomics – changing the building blocks of everything. As Jim Snabe of the WEF put it recently, “today we are obsessed with fixing disease with generic therapy. Imagine if we don’t get sick. Imagine we prevent disease because we do DNA analysis. We may even do modifications. We certainly will have sensors, so that we see things and predict things before it’s too late. If we do have to fix a disease, we do it individually because we understand the individual patient’s individual situation. Imagine what that does to healthcare spending, and to quality of life. That is the opportunity that’s right ahead of us.”
In the 1990s, sequencing the human genome was a project equivalent to constructing the Panama Canal – a multi-year endeavour that required an army of workers and steam-powered diggers. A consortium of international scientists spent 13 years and $3 billion to unlock the mysteries of the human blueprint. Since 2014, supercomputer technology has been available that can sequence 20,000 genomes a year, each in just a few hours, at a cost of $1,000 or less apiece. Interestingly, in this case rapid advances in technology even exceeded Moore’s law in terms of the speed improvements of gene sequencing.
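To see how far sequencing outpaced Moore’s law, compare the two cost-decline factors. This is a back-of-the-envelope comparison of my own, using the $3 billion and $1,000 figures above; the ~12-year span between the first full genome and the $1,000 genome is my assumption:

```python
# Actual cost drop in genome sequencing, per the figures above.
cost_then, cost_now = 3_000_000_000, 1_000
actual_factor = cost_then / cost_now         # a 3,000,000-fold drop

# What a Moore's-law pace (cost halving every ~2 years) would predict
# over an assumed ~12-year span: 6 halvings.
moore_factor = 2 ** (12 / 2)                 # a 64-fold drop

print(f"actual: {actual_factor:,.0f}x  vs  Moore's law alone: {moore_factor:.0f}x")
```

A 64-fold improvement against a roughly three-million-fold one: sequencing costs fell tens of thousands of times faster than transistor economics would suggest.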
The rapidly declining cost of gene sequencing is encouraging studies of how genes determine traits or mutate to cause disease. Increasingly affordable genetic sequencing, combined with big-data analytics, will allow fast diagnosis of medical conditions, the pinpointing of targeted cures and, perhaps in the near future, even the creation of ‘customised’ organisms, with applications in agriculture, food and medicine.
If you’d like to learn more about this, I’d recommend you read the publications on nature.com about the Encyclopaedia of DNA Elements – a project called ENCODE, launched in 2003, published in 2012, intended as a follow-up to the Human Genome Project, which aims to identify all functional elements in the human genome.
ENCODE is a giant endeavour to catalogue the entire genome and annotate all its components. All genomes, including ours, are strings of code. The code is written in an alphabet of just four letters, and it contains the information needed to make the proteins that build our bodies. But just like the letters in a sentence, the individual bits of the code are meaningless on their own – just a set of boring letters. ENCODE gives those letters meaning, bringing them to life. The genome is a vast place, and answering the wide range of biological questions it raises requires experiments run on a mega scale, using large computer farms and a worldwide consortium of scientists and data analysts.
In each of our cells, the genome is read slightly differently: different types of cells use different parts of the genome. What switches those parts on and off was, at first, a bit of a mystery. ENCODE set out to study around a hundred different cell types to begin with, and to understand why, for example, your liver cells are different from your kidney cells. The complexity is enormous. When it first became clear that just over one per cent of our genome codes for actual proteins, some scientists wondered whether the rest was just junk. Not so – it has since been found that every part of the genome is being used.
This leads us to CRISPR – short for Clustered Regularly Interspaced Short Palindromic Repeats – a gene-editing technique developed in 2012 by the molecular biologists Jennifer Doudna and Emmanuelle Charpentier. Doudna’s team at the University of California, Berkeley was studying how bacteria defend themselves against viral infection. The natural system they discovered can be used by biologists to make precise changes to any DNA. This technology has the potential to change the lives of everyone and everything on the planet.
Whether in plant, animal or human cells, CRISPR allows one to insert, delete or repair specific stretches of DNA – much like the copy-and-paste function of a word processor. The use of CRISPR for genome editing was the AAAS’s choice for breakthrough of the year in 2015, and it is quite likely that this will prove one of the 21st century’s breakthrough technologies.
The BBC’s medical correspondent Fergus Walsh’s explanation of how it works may aid understanding: “When a bacterium comes under attack it produces a piece of genetic material that matches the genetic sequence of the invading virus. This piece of material, in tandem with a key protein called Cas9, can then lock on to the DNA of the virus, break it and disable it. It is so sensitive that scientists can use it to explore the billions of chemical combinations that make up the code of the DNA in a cell, and to make a single key change.”
Crucially, it is fast and cheap, and so is accelerating all kinds of research – from the creation of genetically-modified animal models of human disease to the search for DNA mutations that trigger illness or confer protection. In theory, it might be possible to correct the DNA of embryos but it might also be used to add in genetic enhancements, leading to designer babies. No scientist is suggesting – yet – that genetically edited human embryos should be born, but several teams in China have done some basic research, and the UK is the first country to formally approve gene editing in human embryos, for research only.
While China has no national religion, Confucian thinking is still dominant. The belief is therefore that one becomes a person at birth, not before, which clearly differs from the Christian conception. Research on embryos is therefore likely to be less of a taboo in China than it is in the West.
As a third and last example, let’s have a look at advances in materials science as another disruptive innovation. Materials science is rapidly transforming the way everything from cars to light bulbs is made. The ability to understand the properties of materials at the tiniest scales not only lets people do old things better; it lets them do new things. This is what some scientists describe as a ‘golden age’ for materials.
The process of manipulating materials at a molecular level has made nanomaterials possible. Advocates of nanotechnology talk of building things atom by atom. The result is a flood of new substances and ideas for ways of using them. Such breakthroughs have already enabled ordinary materials such as carbon and ceramics to take on surprising new properties – greater reactivity, unusual electrical properties and greater strength.
For example, carbon fibre is used not only to engineer lighter aircraft, but also by BMW for its electric i-series cars. The resulting structure, although stronger than steel, is at least 50 per cent lighter, and about 30 per cent lighter than aluminium. Nor does it corrode. Since the carbon-fibre body provides the vehicle with its strength, the outer panels are mainly decorative and made from plastic. These are simple to spray in a small paint booth, whereas metal requires elaborate anti-corrosion treatment in a costly paint shop. In all, the BMW i3 factory uses 50 per cent less energy and 70 per cent less water than a conventional facility.
Besides ever better tools and instruments, researchers are also benefiting from a massive increase in available computing power. This allows them to explore the properties of virtual materials in detail before deciding whether to make something out of them.
Nanomaterials have already been used in products ranging from pharmaceuticals to sunscreens and even bicycle frames. Now, as MGI explains, new materials are being created that have attributes such as enormous strength and elasticity and remarkable capabilities such as self-healing and self-cleaning. Smart materials and memory metals (which can revert to their original shapes) are finding applications in a range of industries such as aerospace, pharmaceuticals, and electronics.
As reported in the Economist, Gerbrand Ceder from the University of California, Berkeley, together with Kristin Persson, of the Lawrence Berkeley National Laboratory, founded the Materials Project – an open-access venture using a cluster of supercomputers to compile the properties of all known and predicted compounds. The idea is that, instead of setting out to find a substance with the desired properties for a particular job, researchers will soon be able to define the properties they require and their computers will provide them with a list of suitable candidates. This will provide what the people working on the project call the ‘materials genome’: a list of the basic properties – conductivity, hardness, elasticity, ability to absorb other chemicals and so on – of all the compounds anyone might think of.
“In ten years, someone doing materials design will have all these numbers available to them, and information about how materials will interact,” says Mr Ceder. “Before, none of this really existed. It was all trial and error.”
Engineering at the molecular level improves old materials as well as creating completely new classes of them. What interests materials scientists is that, with modern processing techniques, many bulk materials can be turned into nanoparticles – particles measuring 100 nanometres (a nanometre is a billionth of a metre) or less. The reason for doing so is that nanoparticles can take on new or greatly enhanced properties because of quantum-mechanical and other effects: unique physical, chemical, mechanical and optical characteristics related to the particles’ size. Engineers can capture some of those properties by incorporating nanoparticles into their materials.
Manufacturers are coming under growing pressure to take responsibility for the life cycle of their products. This involves an obligation to consider all the energy, environmental and health effects of every stage, from materials extraction to production, distribution and, eventually, recycling or disposal. As materials become more complex, this is becoming trickier.
The traditional way of gauging what effects a new material will have on the wider world is to go by the elements. If something has lead in it, for instance, it is probably not good for you. If it has a bit of manganese, it is probably safe. “That is so old-fashioned,” says Mr Ceder. “Very often what these things do to your body depends on the form, not the chemistry.”
That makes nanoparticles particularly difficult. A lot of research is being done on their environmental and health implications, but much of it is inconclusive.
And this is why I raise the red flag for nano-materials. We should not look only at the first tempting effects of a use case but far beyond, because here lies a huge risk – even danger: there are still no secure filtering or collection techniques available for their disposal. If nano-materials, with their artificially engineered atomic structures, enter any kind of ‘disposal’ cycle after their first use, they could end up in the environment and in our bodies, damaging our cells and cell membranes, with potentially disastrous consequences in 30-40 years.
So here we surely need the strictest responsibilities, rules, liabilities and controls over any kind of disposal, or the guaranteed implementation of a ‘self-destruction mechanism’ within these nano-materials that activates after a certain amount of time.
In the next parts of this series, Reinhold Karner will tackle the remaining two disruptive forces, and conclusions and recommendations will follow.