Welcome to the Club of Amsterdam Journal. CERN, the European Organization for Nuclear Research near Geneva, Switzerland, is famous for its particle accelerators, such as the Large Hadron Collider. CERN is a truly global scientific endeavour, with so many aspects and dimensions that it is hard to grasp its ubiquitous impact on human lives. The Einstein heritage also contains the deep drive to push back the boundaries of our knowledge: as a young boy, Einstein dreamed of surfing on a light wave, and this dream – transformed – still drives many scientists in the CERN community. The relation between our daily environment and the fundamental level of particles is equally seductive and fascinating. Join us at the event about the future of CERN on June 3. Felix Bopp, editor-in-chief
What is the future of natural gas in Europe?
by Michael Akerib, Rusconsult
The use of natural gas to generate electricity has increased substantially in both Europe and the US, as gas is a less polluting fuel than coal and converts to power more efficiently. The International Energy Agency forecasts an annual increase in demand of 1.5% over the next ten to twenty years.
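To see what that rate implies, here is a quick back-of-the-envelope compounding calculation (our own illustration, not a figure from the IEA itself): demand growing at 1.5% a year ends up roughly 16% higher after ten years and 35% higher after twenty.

```python
# Back-of-the-envelope check (ours, not from the IEA): what does 1.5%
# annual growth in gas demand compound to over ten and twenty years?
for years in (10, 20):
    factor = 1.015 ** years
    print(f"{years} years: demand x{factor:.2f} (+{(factor - 1) * 100:.0f}%)")
# 10 years: demand x1.16 (+16%)
# 20 years: demand x1.35 (+35%)
```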
This increased use may, however, push consuming countries – and the European Union in particular – into a higher dependency ratio, of up to 80%, with regard to their suppliers. Gas extraction is geographically concentrated in a limited number of countries, some of which have dwindling resources, require considerable investment to maintain their present production levels, or both. Russia estimates that investments on the order of 300 billion dollars will be required over the next 20 years to meet demand. In Russia’s case, large investments to maintain an archaic pipeline system are also required.
Should these large investments not be made, supplies would shrink and Russia would most likely raise prices to maintain its revenue flow.
Europe’s natural gas suppliers are Norway, Russia, Central Asia and North Africa. Supplies from Iran are out of the question for the moment. Europe’s dependency might induce its suppliers to select their clients on the basis of a political agenda.
The world’s leading gas producers are Russia, the United States, Canada, Iran and Norway. Several European countries are now proceeding with exploratory drillings but results are so far unavailable.
Europe’s environmental concerns will no doubt lead to reduced demand – by up to one-third by 2030, according to some forecasts – while the opposite will be true for Asia. Competition between the two continents for adequate supply will increasingly weigh on suppliers’ decisions about building transport routes from Russia, Central Asia or the Gulf. Routes to the Asian markets are significantly longer, and therefore more expensive, than those to Europe. China’s massive foreign-exchange reserves would enable it to finance gas pipelines over long distances and difficult terrain, but the availability of Australian gas may spare it this outlay.
China has also struck deals with foreign companies that will drill in China for shale gas. The government has set a target of producing 50% of its gas requirements domestically.
Another worry is the possible creation of a gas producers’ cartel, but this is unlikely as long as the market is dominated by long-term contracts and producers and consumers have little flexibility, linked as they are by the fixed structure of a pipeline. Further, the interests of the gas producers are far from homogeneous.
The readiness of some consumers to invest in the re-gasification terminals needed to receive LNG (Liquefied Natural Gas) – on the assumption that the corresponding liquefaction investments in developing countries will follow, even in uncertain markets where a five-year lead time may be perceived as too long – reduces dependence on the closest supplier, Russia. LNG has the added advantage that its prices are not automatically indexed to oil prices.
On pricing, the current indexation of the price of natural gas to that of oil has its limits: in the case of a strong increase in the price of oil – which every pundit has been forecasting for the last half a dozen years – nuclear energy could become an even more attractive alternative than it is at present. In particular, the cost of energy produced by a nuclear power station remains essentially unchanged over long periods of time, with the risks lying elsewhere: safety and the availability of uranium.
An increase in Europe’s reliance on nuclear power would lessen its dependence on Russian gas.
Investments in renewable energies remain small and therefore any impact these sources can have in a period of gas shortage remains marginal.
Improvements in energy productivity would obviously have a major impact on gas imports in consuming countries. So would the discovery of major gas fields on the Old Continent, although production there may be delayed by ecological fears of aquifer contamination. Russia would be the country most negatively affected, as it is proceeding to develop fields that are difficult, and thus costly, to operate. The Shtokman project in the Arctic, for instance, has been postponed.
One incentive for consuming countries seeking stable supplies is to allow suppliers to acquire local companies and thus integrate downstream, into what is sometimes the most profitable end of the industry. However, the European Energy Charter regulates sales of infrastructure to non-EU companies. This has not stopped Russia’s Gazprom from acquiring a portfolio of companies and planning a major expansion downstream, even though Russia, just like Venezuela, restricts foreign investment in its own infrastructure. Whether this will continue if demand drops remains to be seen.
Political uncertainty is one more parameter to be taken into account in a developing market in which decisions on very large investments have to be made in situations of great uncertainty.
Next Event
the future of CERN
CERN is the European Organization for Nuclear Research near Geneva, Switzerland, and famous for its particle accelerators, such as the Large Hadron Collider.
Thursday, June 3, 2010
Registration: 18:30-19:00, Conference: 19:00-21:15
Location: WTC – World Trade Center, Metropolitan Boardroom of Amsterdam In Business, D tower 12th floor, Strawinskylaan 1, 1077 XW Amsterdam
Socratic conversation with
Dr. Sergio Bertolucci, Director for Research and Scientific Computing, CERN; moderated by Humberto Schwab, Philosopher and Physicist
At the press event at CERN for Angels & Demons, left to right: Sergio Bertolucci (CERN Director for Research and Scientific Computing), Tara Shears (Liverpool University and the LHCb experiment), Tom Hanks, Ayelet Zurer, Rolf Landua (CERN) and Ron Howard.
Restaurant of the Future
The Restaurant of the Future is a cooperation between scientists of Wageningen UR, catering company Sodexo, Noldus software developers, and professional kitchen supplier Kampri Group.
The Restaurant of the Future is a unique blend of research and practice aimed at something as common as eating and drinking. It is a place to experiment with new food products, preparation methods and self-service systems, and also a facility allowing close observation of consumer eating and drinking behaviour. The facility is one of a kind: an environment where scientists can observe restaurant visitors in controlled conditions over a prolonged period of time. This research may cover behaviour, food choice, design and layout, the influence of lighting, presentation, traffic flow, taste, packaging, preparation and countless other aspects of out-of-home eating and drinking. The Restaurant of the Future comprises two parts:
A company restaurant
open to visitors who have declared that they have no objection to being under close observation by cameras. Environmental aspects such as colour and lighting can be manipulated in the restaurant for research purposes.
A sensory consumer research lab
which businesses can use to assess their products – under various circumstances – for smell, colour and taste.
- A “living lab” for consumer studies
- Sensory laboratory with 16 cabins, incl. physiological measurements and face recognition
- Physiological laboratory (e.g. taste/smell & EEG measurements)
- Multifunctional room: 8 extra sensory booths, training sessions, brainstorm sessions (concepts, flop analysis)
- Mood rooms to study the effects of situational factors on eating behaviour, well-being and performance
- Kitchens (4 in total; Kampri Group) to study convenience of preparation, usage and cooking behaviour; to develop and test new preparation techniques; direct link between product development and consumer testing
- Restaurant with flexible interior (Sodexo)
- All rooms equipped with video cameras (35 cameras in total; Noldus)
- Imaging and data synchronisation & integration software
Club of Amsterdam blog
http://clubofamsterdam.blogspot.com
March 6: The world needs a new taxation paradigm
March 3: Transformative Thinking
January 8: Mobile Trends 2020
October 6: … Just as Beauty lies in the Eyes of the Beholder … is Wisdom found in the Mind of the Receiver
September 21: Future Connectivity: Healthcare Revolution for Community Development
August 5: Music 3.0 and the rocky pre-media past
News about the Future
Australia plan will divert textiles from landfill
The Technical Textiles & Nonwoven Association (TTNA) released an engagement proposal for its bid to develop the Australian Fibre & Textile Environmental and Recycling Cooperative Research Centre (AFTER-CRC).
Responding to intensified interest in the reuse, reclamation and regeneration of fibrous material, the TTNA is seeking engagement from manufacturers, retailers and research and education institutions.
Kerryn Caulfield, Executive Manager of the TTNA, says: “While fibrous inputs are the TCF industry’s greatest overhead, (approx.) one million tonnes of fibrous waste are buried in Australian landfills every year. Fibrous waste is an unrealised source of valuable raw materials that can be reclaimed for further use by developing frontier technology.” Ms Caulfield believes that the proposed CRC will give interested companies head-start access to a new industry and an opportunity to exploit the commercial opportunities resulting from the centre’s activities. “Who would have thought thirty years ago that fortunes would be made from recycling goods such as paper and tyres?”
Inuit knowledge helps science learn something new about Arctic weather
Inuit forecasters equipped with generations of environmental knowledge are helping scientists learn something new about Arctic weather. A study published this month in the journal Global Environmental Change brings together two worlds, combining indigenous environmental knowledge with the practice of statistical weather analysis. The study, a collaboration between researchers at the National Snow and Ice Data Center (NSIDC) and the University of Colorado at Boulder’s Cooperative Institute for Research in Environmental Sciences (CIRES), shows that including Inuit observations and stories in climate research can not only provide valuable insights into asking the right scientific questions, but also help researchers find new ways of answering them.
“It’s interesting how the western approach in my mind is often trying to understand things without necessarily experiencing them,” said Elizabeth Weatherhead, a research scientist with University of Colorado at Boulder’s Cooperative Institute for Research in Environmental Sciences. “With the Inuit, it’s much more of an experiential issue, and I think that fundamental difference brings a completely different emphasis both in defining what the important scientific questions are, and discerning how to address them.”
Recommended Book
Collider: The Search for the World’s Smallest Particles (Kindle Edition)
by Paul Halpern
An accessible look at the hottest topic in physics and the experiments that will transform our understanding of the universe
The biggest news in science today is the Large Hadron Collider, the world’s largest and most powerful particle-smasher, and the anticipation of finally discovering the Higgs boson particle. But what is the Higgs boson and why is it often referred to as the God Particle? Why are the Higgs and the LHC so important? Getting a handle on the science behind the LHC can be difficult for anyone without an advanced degree in particle physics, but you don’t need to go back to school to learn about it. In Collider, award-winning physicist Paul Halpern provides you with the tools you need to understand what the LHC is and what it hopes to discover.
The Dawn of the Intelligent Planet
IBM Forum Slovenia 7-8 April, 2010
Keynote Speech
The Dawn of the Intelligent Planet
Ladies and Gentlemen, good morning
First of all, I’d like to thank the organizers of this event, for inviting me to speak to you this morning.
Today, I’d like to share with you some fundamental observations that should lead to a better understanding of how the future of technology must, and will likely, unfold, so that our planet can evolve in a more positive way.
My talk today carries the title: The Dawn of the Intelligent Planet
We can draw from this title three implications:
- Something important is about to begin
- It involves the concept of Intelligence, and
- It doesn’t concern only the future of our local world, but also that of our entire planet!
Today I’d like to talk to you about the next important evolutionary step of humankind, which, I believe, will be known in the history books of the future as the period of transition from “Human Intelligence” to “Human-Machine Intelligence”!
We are all here to participate in this amazing experience that will hopefully change the future of humankind in a radically new, positive way. Now, let me make a rather provocative statement:
The only way of ensuring a safe and healthy evolution for our planet is by handing over all of our vital decisions and planning tasks to intelligent machines. Full Stop!
Now, some of you may immediately begin envisioning large packs of dangerously out-of-control, “gone crazy” robots that will roam the planet, enslave the world, and end the freedom of humankind as we know it. As a former clinical psychologist, however, I can tell you that if these fearful thoughts do cross your mind, you are clearly “projecting.” That is, you’re projecting your own predatory human nature onto machines…
But let me allay your fears at this point: No machine, no matter how intelligent, will ever be as unpredictable and dangerous as humans are today. Computer intelligence, as it will develop in this new era of human-machine intelligence, will be much safer, more reasonable, and more predictable than we humans have ever been at any time in our history. Just consider this: You can take a hammer and smash a notebook computer in front of 50 other computers, and they will not even care, much less attack you for it…
If you are not convinced yet, consider, for a brief moment, the collective human behavior of the past few millennia! I believe the record speaks for itself: Humans still are as they have always been: lethal predators, eager to kill, tireless seekers of opportunities to expand their power and possessions, regardless of the endless misery they inflict on themselves and others.
Man has not changed! Not since the very early dark ages of humanity, some millions of years ago, when it all began. He is still the most dangerous and rapacious of all the creatures on the planet. Along these lines, man has built social, political and economic systems, which he has named ‘civilization’ and ‘civilized behavior’, but which are nothing more than very clever ways of practicing his ancient instincts of hunting and killing. To give a modern example, ‘man’ has invented financial markets: a zero-sum game, where one can gain only at the expense of another, with no regard to the collective detriment or the high cost that one’s gains may inflict on everyone else.
And yet, man also has the most amazing and generous creative abilities, having produced, over the centuries, unequaled artistic and technological masterpieces that have the potential to change, for the better, his life and that of all the other creatures on this planet. Man has also invented machines. And like man, these machines can do nearly everything and more. They can build or destroy; communicate or hide in opaque secrecy; they can calculate, predict, land a space vehicle on Mars with square-meter precision, or deliver a deadly missile across 1,000 miles into the bedroom window of an enemy…
Now, the question we must ask ourselves is this: with more than six billion humans on the planet, equipped with these powerful technologies that can destroy or build up the Earth’s valuable infrastructures, how do we manage the ever accelerating evolution of more and more effective machines, given man’s unquenchable thirst for domination?
How do we manage man’s willingness to engage in conflict, even if it takes his own life, or that of millions of others? Can we trust man in this more and more complex world to make local decisions that have global effects? Human management of this planet is truly in question!
Something is really wrong here, because the human condition is not improving, in spite of all our wonderful abilities and beautiful innovations. On a global level, hunger and poverty are vastly increasing; economic distributions on a global scale are dangerously unequal; and even in the best of societies we have lost the sense of what is truly valuable in life. We are increasingly the slaves of communication devices, overwhelming information systems, and technology structures that have not adapted to human needs but, rather, have forced humans to adapt their lifestyle to the intrusive technological infrastructures of this planet.
Only a few months ago, I attended a public debate on the question of whether it is possible to live a week without any communication devices: for one week, simply go back to the life we lived in the 19th century, with no phone, no TV, no Internet; spending time with the family in the evening, sitting together around the dinner table, and perhaps reading from an interesting book to the family, by way of collective evening entertainment.
During this debate, a lady in her late 80s, a former professor of history, explained in great detail how it all was in her childhood. Things happened very slowly then. People still had time to think. People lived in some ways a harder life, but they were happier. Depression was much rarer in those days. By the end of the debate, most agreed that a perfect vacation would be one without any technology… going back to basics… being “off”, rather than being “on” all the time.
For me, as a dedicated futurist and technologist, this was a clear sign that we are still missing an essential ingredient before technology can become a true and positive catalyst in human development. In other words, we need to develop the next step in the evolution of humankind: Human-Machine Intelligence. What is at stake here is no less than a process of co-evolution, in which humans and machines become partners in creating a new mentality and better forms of life for everyone concerned.
Let us remember that the development of technology is in fact a leading component of global human development. We MUST become aware of, and live up to, the exigencies of this new form of symbiosis and co-evolution. All of us in this room are closely involved in technological development and innovation. We are the people who are accountable for the very important transitions of humanity in the future. We must live up to this responsibility, today and in the future!
So let me summarize my argument up to this point: if we are to usher in the dawn of Planetary Intelligence, we need to attend to the co-evolution of humans and machines. I’ll now look, first, at the machine part of the equation.
One of the fundamental problems of today’s technology is that it still requires the participation of humans to function. In fact, we are busier than ever interacting with all of these devices and machines that we have built, rather than letting them simply do their work automatically. Technology is absorbing us, rather than helping us.
Machines can act very fast, but humans are very slow in evaluating the results of their performance. Again, the financial industry is a good example: computer-based, algorithmic trading increasingly makes decisions in the millisecond range, exposing billions of dollars to transactions that humans can no longer follow in real time.
On the one hand, machines have evolved to a point where they can do substantial tasks, and act and react at speed levels that are highly uncomfortable or even intractable for humans. On the other hand, the machines of today still lack true intelligence, and therefore, they can cause substantial disasters, usually at high speeds that can substantially magnify the negative results.
But we must also recognize that computing machines are beginning to close the gap between learning and acting. To refer again to the financial industry: computers now read the global news automatically, analyze its content in milliseconds, and deploy trades instantaneously. But, of course, there is a disastrously weak link in this technology: the logic for executing trades is hard-coded by so-called quantitative analysts. Herein lies the whole problem: should the nature of the data input change, then the interpretation of these news feeds must change as well. But the current generation of machines cannot react; they are not allowed to deviate from their hard-coded instructions! Therefore, these machines are only half-smart. And that’s the point!
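To make the point concrete, here is a purely illustrative sketch of such a “half-smart”, hard-coded news-trading rule – every keyword, function name and threshold is invented for this example and taken from no real trading system. The rule fires mechanically and has no way to adapt when the character of its input changes:

```python
# Illustrative only: a hard-coded news-trading rule of the kind described
# above. All keywords and thresholds here are hypothetical.
NEGATIVE_WORDS = {"default", "bankruptcy", "downgrade", "fraud"}

def sentiment_score(headline: str) -> int:
    """Crude hard-coded sentiment: count negative keywords (negated)."""
    return -sum(word in headline.lower() for word in NEGATIVE_WORDS)

def decide_trade(headline: str) -> str:
    # The rule is fixed at design time by the quantitative analyst.
    # If the character of the news feed changes (new vocabulary, irony,
    # another language), the rule cannot adapt - it just keeps firing.
    return "SELL" if sentiment_score(headline) < 0 else "HOLD"

print(decide_trade("Ratings agency issues downgrade warning"))  # SELL
print(decide_trade("Markets rally on strong earnings"))         # HOLD
```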
What is needed are intelligent machines – not hard-coded rules, but true machine intelligence.
Current artificial-intelligence systems are, as a rule, one of two types: logic-based or probability-based. But researchers, including myself, have lately developed new technologies and computer languages – such as MIT’s Church, or my own Quantum Relation Technology language, QRT for short – that combine the best aspects of each type and make AI smarter and more humanlike.
It started with AI researchers back in the 1950s, who thought of the human mind as a set of rules to be programmed. Thus, they developed systems based on logical inference: “if you know that birds can fly and are told that an eagle is a bird, you can infer that eagles can fly.”
But with rule-based AI, every exception had to be accounted for. The systems could not figure out on their own that some types of birds cannot fly; they had to be told so explicitly, by coding these exceptions into the program. Later AI models gave up these extensive rule sets and turned to probabilities: “a computer is fed lots of examples of something – like pictures of birds – and is left to infer on its own what those examples have in common.”
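A minimal sketch (our own toy code, not any historical AI system) shows what such rule-based inference looks like; note that every flightless bird must be enumerated by hand:

```python
# Toy rule-based inference in the spirit of 1950s-style AI.
# Every exception must be hand-coded; the system cannot discover one.
RULES = {"bird": True}                 # "birds can fly"
EXCEPTIONS = {"penguin", "ostrich"}    # must be listed explicitly

def can_fly(animal: str, kind: str) -> bool:
    if animal in EXCEPTIONS:           # hand-coded exception check
        return False
    return RULES.get(kind, False)      # logical inference from the rule

print(can_fly("eagle", "bird"))        # True  - inferred from the rule
print(can_fly("penguin", "bird"))      # False - only because we coded it
```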
Church and QRT are both “grand unified theories of AI”: both systems create probability-based rules that are constantly revised as the system encounters new situations. For example, a Church or QRT program that has never encountered a flightless bird might initially set the probability that any bird can fly at 99 percent. But as it learns about ostriches and penguins, and caged and broken-winged birds, it revises its probabilities continuously. Eventually, the probabilities represent most of the conceptual distinctions that early AI researchers would have had to code by hand – but the system learns those distinctions itself over time, much the way humans learn new concepts and revise old ones. In the early years of my research in defining new models of AI, I also called this approach “Human-Emulated Artificial Intelligence.”
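For contrast, here is an equally minimal sketch of the probabilistic approach (again a toy of our own devising, not actual Church or QRT code): the belief that birds fly starts near certainty and is revised downward as flightless birds are observed.

```python
# Toy probabilistic revision: a prior belief that 99% of birds fly,
# updated as new observations arrive. Not real Church or QRT code.
PRIOR_FLYERS, PRIOR_TOTAL = 99, 100    # initial belief: P(fly) = 0.99

def p_bird_flies(observations: list) -> float:
    """Revise the flying probability given observed birds (True = flew)."""
    flyers = PRIOR_FLYERS + sum(observations)
    total = PRIOR_TOTAL + len(observations)
    return flyers / total

sightings = [True] * 20 + [False] * 10   # 20 flying birds, 10 penguins
print(f"P(bird flies) = {p_bird_flies(sightings):.2f}")  # ~0.92
```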
Today we know that these new approaches already surpass current AI models. In newly developed applications in which, for example, a QRT system was deployed to make predictions based on a set of observations, it did a “significantly better job of modeling intelligent returns than traditional artificial-intelligence algorithms did in the past.”
Of course, these new technologies still need further improvement, and specific operations are extremely “computationally intensive” when they tackle broader-based problems. I am sure that the hardware division of IBM is delighted about this fact, because it ensures the prolific sale of supercomputers well into the future.
Nevertheless, this is only the beginning!
New systems must begin to model global problems, and must have the ability to understand and process interdisciplinary problems in parallel, internally and continuously. Such global systems must have the ability to contain all local problems within; they must be globally connected and must fully account for the “butterfly effect.”
And this, ladies and gentlemen, requires the building of a fully interconnected and intelligent planet! This, of course, also brings me back to the beginning of my talk and to the other term in our co-evolutionary equation: the human factor.
I’d like to say this once again:
Intelligent machines should not only solve systemic local problems; more importantly, they must become an important and responsible part of global human development. As one of my close friends and collaborators, Prof. Mihai I. Spariosu of the University of Georgia in the US, has argued, global intelligence is the ability to understand, respond to, and work toward what will benefit all human beings and will support and enrich all life on this planet. Global intelligence is based on the collective awareness of the interdependence of all localities within a global frame of reference, and on the enhanced individual responsibilities that result from this interdependence.
As no national or supra-national authority can predefine or predetermine it, global intelligence involves long-term, collective learning processes and can emerge only from continuing intercultural connectivity, open dialogue, and peaceful cooperation of all members of the planet.
The phrase “what will benefit all human beings” in this context, however, should not be understood in the restricted, utilitarian sense of an excessively materialistic focus on wellbeing. The new models of global intelligence will sooner or later give humans back the freedom to be less concerned with the management, or even the productivity, of this planet, and more with the responsible enhancement, stewardship, and enjoyment of its beautiful gifts.
The science fiction of the 1960s and 70s always envisioned the year 2000 as a futuristic society in which computers did all the management and robots did all the production. But the year 2000 came and went… and there is still no trace of such a society. What is still missing is the “unified theory of intelligence” that would enable us to build our societies on global intelligence, which is in turn based on the co-evolution and symbiosis between man and machine.
We are now working towards such goals. But we must not stop developing the tools needed to get there.
So, let me highlight some of the basic technologies and infrastructures that need to be developed to bring about the “Intelligent Planet” in the foreseeable future:
- We need to develop massive, supercomputer-driven, global knowledge centers that manage all of the Earth’s open-source data globally, and analyze its content in an interdisciplinary and intercultural form.
- We must connect all these global knowledge centers, so that they become a globally connected mesh of knowledge repositories
- We must also develop a global mesh of networked sensory devices and data-extraction technologies that collect information of every kind 24 hours a day, 7 days a week, and transport such information in real time into the knowledge repositories
- We must develop the best AI solutions possible, to continually search our knowledge repositories for deep-rooted patterns and understandings that point to globally responsible opportunities and common planetary risks
- We must build access technologies that allow anyone, at any time, to access these analytical knowledge repositories and use their information at no cost, to allow human development independent of economic power
- We must build broadcast technologies that use a world-standard format and continuously broadcast streams of data and information, including any type of risk or corrective information that may be vital to human development
- We must develop these solutions as a global utility that remains free of charge, and free of all political and dogmatic influences
- All vital human services, such as global commerce, healthcare, education or even governance must become a global solution with local subsets
- All local machine intelligence must have full access to these automated global knowledge networks
- All global information must remain open-source knowledge, available to all members of this planet
Implementing these ten points will lay the vital foundation for the Intelligent Planet.
And finally, we must realize this:
Today, there is only one serious technology player left on this planet that can take us to this future – the future of the Intelligent Planet. The player I am talking about is IBM. It will take astronomical amounts of investment in newer and faster hardware technologies, comprehensive commitments to develop new global middleware and interconnecting mesh technologies, and other similar systems, to work toward this Intelligent Planet.
But one thing is clear: we must make the Intelligent Planet our most important goal. Given the complexity of modern society and the desideratum of continuous, peaceful human development, we MUST work toward its successful accomplishment, and not get stuck in debates about its necessity.
So, I encourage everybody in this room to be part of one of the most challenging, but also most rewarding, frontiers of our millennium: the Dawn of the Intelligent Planet.
Thank You!
See also The Dawn of the Intelligent Planet
IDEO – a global design and innovation firm
IDEO‘s focus lies at the intersection of insight and inspiration, and is informed by business, technology, and culture.
“Design thinking is an approach that uses the designer’s sensibility and methods for problem solving to meet people’s needs in a technologically feasible and commercially viable way. In other words, design thinking is human-centered innovation.” – Tim Brown, CEO, IDEO
… just a few projects
As part of Living Climate Change, IDEO New York imagines Manhattan in 2030 as a city without trash. When most people consider the environmental impact of trash, steamy landfills and smoky incinerators come to mind. The reality, however, is that these are merely endpoints of a much larger system. Come take a look or view more scenarios at Living Climate Change, a place to discuss the most defining design challenge of our time.
Improving the long-haul flight experience for international travelers
Air New Zealand provides passenger and cargo services within the country and to and from Australia, the South Pacific, Asia, North America, and the United Kingdom. Over the past decade, the airline has recovered from near bankruptcy and become a strong regional competitor. In 2009, Air New Zealand served more than 12 million passengers; Condé Nast Traveller ranked it the No. 2 long-haul leisure carrier worldwide, and the airline also recently won the Air Transport World Airline of the Year Award. Air New Zealand is slated to be the first customer for Boeing’s long-awaited 787-9 Dreamliner, due in 2013.
To prepare for the launch of its new Boeing 777-300 aircraft in November 2010, Air New Zealand scrutinized its current long-haul offering. The company asked IDEO to rethink the entire experience – from the cabin’s layout and equipment, such as the seating in economy and business class, to the in-flight service and entertainment, and even its customers’ experience inside and beyond the terminal.
Drawing on its human-centered design expertise, IDEO quickly recognized that any service provided by a national carrier should reflect the culture of the country it represents. This proved especially true for New Zealand, a country very proud of its airline and of what it has to offer the world. The creative team immersed itself in the culture, making a half-dozen trips abroad and spending an intensive month in the North and South Islands. The team gained a deep understanding of the New Zealanders’ way of life, concluding that New Zealand-style customer service should be generous, humble, and thoroughly democratic. This research and its findings – a series of collaborative workshops, the construction of full-scale seating prototypes, and the creation of a video outlining new service scenarios and opportunities – inspired and empowered Air New Zealand to share the unique aspects of its culture with future travelers in a pioneering way.
Together, Air New Zealand and IDEO revamped the airline’s equipment, service, and technology strategy. Innovative seats will allow travelers one of two desired experiences: connection and socialization, or solitude and retreat. Their reconfigurable design permits each passenger a level of interaction with (or privacy from) others that was previously reserved for those in first class. In addition to best-in-class video and gaming, in-flight entertainment will allow travelers, Kiwi and foreigner alike, to share their experiences, photos and recommendations with each other, making plans and preserving memories for the life that follows disembarkation. The airline’s service strategy, both onboard and on the ground, will shift to celebrate the people, rather than the landscape, of New Zealand – giving crew and passengers alike opportunities to interact and form meaningful connections. Policies and procedures were crafted to give travelers more control over their space, their time, and the meeting of their demands – and ultimately over having an enjoyable and memorable flight. Creating its own technology platform was essential to delivering on this promise of improved and individualized in-flight experiences at scale. IDEO worked with Air New Zealand to understand what it could do – build, buy, or partner – with a view towards near-term implementation.
“[IDEO] reminded us that with the world’s longest hauls, we had a greater obligation than any other airline to give passengers more,” Ed Sims, Air New Zealand’s GM of International Airline said. “We wanted a creative agency to really challenge our own creative talent [and] IDEO was a standout. [With their help] we’ve reinvented everything we do and given choice and control back to the passenger.”
Ethical consumerism concepts for Oxfam GB
In 2007 and 2008, Oxfam partnered with IDEO to increase awareness of ethical consumption as a means of alleviating social and environmental problems. To better understand consumer behaviors, IDEO and Oxfam sought opinions from shoppers, finding that most are loyal to trusted brands, that aesthetics trump virtue, and that consumers ideally want brands to act ethically. The resulting design principles reinforced sexiness over sacrifice and included such concepts as a guerrilla marketing campaign and an environmental impact evaluation program.
Futurist Portrait: John R Grizz Deal
John R Grizz Deal is CEO of Hyperion Power Generation. Grizz has over twenty years of experience in technology commercialization and fast-growing ventures, in both product development and chief executive roles. He was previously the managing director at Purple Mountain Ventures (PMV), advising a dozen international firms on product development, capital expansion, and marketing. Grizz also serves as a director of the U.S. National Lab Seed Fund, a venture capital fund focused on innovations developed by the U.S. Department of Energy laboratory complex. He is the former TVC entrepreneur in residence with the U.S. Department of Energy/NNSA, visiting entrepreneur at Los Alamos National Laboratory (LANL), chief marketing officer for Space Imaging, founder and former CEO of LizardTech, and a consulting scientist at LANL.
Grizz is a frequent speaker and writer on energy technology and policy, starting and growing advanced technology-based ventures, and issues in raising capital for such ventures. He is a member of the board of directors for the International Clean Energy Alliance, and serves on the boards of several PMV portfolio firms. Grizz has raised over $200 million in risk capital for his ventures and holds graduate and undergraduate degrees in geography from Texas A&M University.
“The Hyperion Power Module was originally conceived to provide clean, affordable power for remote industrial applications such as oil sands operations,” said Deal. “Yet, the initial enthusiasm has been from those needing reliable electricity for communities. The big question for the 21st century is, ‘how do we provide safe energy to those who need it, indeed those developing nations who demand it, without contributing to climate change?’ Today’s safer, proliferation-resistant nuclear power technology is the answer, but it’s not feasible for every community to be tied to a large nuclear power plant. Some communities, those that need power for just the most basic humanitarian infrastructure, such as clean water production for household use and irrigation, are too remote for conventional nuclear power. This is where the Hyperion Power Module, a safe, secure, transportable power generator can help.”
Agenda
Season Program 2009/2010
June 3, 2010, 18:30-21:15: the future of CERN
Location: WTC – World Trade Center, Metropolitan Boardroom of Amsterdam In Business, D tower 12th floor, Strawinskylaan 1, 1077 XW Amsterdam