More About Singularity
Here is a very interesting 80-minute Ken Gumbs film, Building Gods (Rough Cut), made in 2006, featuring artificial intelligence researchers Hugo de Garis and Kevin Warwick and philosopher Nick Bostrom.
Also below is a new 44-minute Next World video, Future of Intelligence, featuring Ray Kurzweil [who, along with Joel Garreau, Vernor Vinge and Bruce Sterling, helped popularize the term "singularity"], Guido Jouret, Hiroshi Ishiguro, Stephen Jacobsen, Rod Humble, Seth Goldstein, Dave Evans, Michel Parent, Stephane Aubarbier, Jeff Kleiser, Marthin De Beer, Steve Kieron, Marie Hattar, Brian Conte, James Kuffner and Kevin Warwick.
Illustrative table of Who's Who in Singularity: Click here for a large version of this chart [PDF format].
Signs of the Singularity
By Vernor Vinge
This is part of IEEE Spectrum's SPECIAL REPORT: THE SINGULARITY
I think it's likely that with technology we can in the fairly near future create or become creatures of more than human intelligence. Such a technological singularity would revolutionize our world, ushering in a posthuman epoch. If it were to happen a million years from now, no big deal. So what do I mean by “fairly near” future? In my 1993 essay, “The Coming Technological Singularity,” I said I'd be surprised if the singularity had not happened by 2030. I'll stand by that claim, assuming we avoid the showstopping catastrophes—things like nuclear war, superplagues, climate crash—that we properly spend our anxiety upon.
In that event, I expect the singularity will come as some combination of the following:
The AI Scenario: We create superhuman artificial intelligence (AI) in computers.
The IA Scenario: We enhance human intelligence through human-to-computer interfaces—that is, we achieve intelligence amplification (IA).
The Biomedical Scenario: We directly increase our intelligence by improving the neurological operation of our brains.
The Internet Scenario: Humanity, its networks, computers, and databases become sufficiently effective to be considered a superhuman being.
The Digital Gaia Scenario: The network of embedded microprocessors becomes sufficiently effective to be considered a superhuman being.
The essays in this issue of IEEE Spectrum use similar definitions for the technological singularity but variously rate the notion from likely to totally bogus. I'm going to respond to arguments made in these essays and also mine them for signs of the oncoming singularity that we might track in the future.
Philosopher Alfred Nordmann criticizes the extrapolations used to argue for the singularity. Using trends for outright forecasting is asking for embarrassment. And yet there are a couple of trends that at least raise the possibility of the technological singularity. The first is a very long-term trend, namely Life's tendency, across aeons, toward greater complexity. Some people see this as unstoppable progress toward betterment. Alas, one of the great insights of 20th-century natural science is that Nature can be the harshest of masters. What we call progress can fail. Still, in the absence of a truly terminal event (say, a nearby gamma-ray burst or another collision such as made the moon), the trend has muddled along in the direction we call forward. From the beginning, Life has had the ability to adapt for survival via natural selection of heritable traits. That computational scheme brought Life a long way, resulting in creatures that could reason about survival problems. With the advent of humankind, Life had a means of solving many problems much faster than natural selection.
In the last few thousand years, humans have begun the next step, creating tools to support cognitive function. For example, writing is an off-loading of memory function. We're building tools—computers, networks, database systems—that can speed up the processes of problem solving and adaptation. It's not surprising that some technology enthusiasts have started talking about possible consequences. Depending on our inventiveness—and our artifacts' inventiveness—there is the possibility of a transformation comparable to the rise of human intelligence in the biological world. Even if the singularity does not happen, we are going to have to put up with singularity enthusiasms for a long time.
Get used to it.
In recent decades, the enthusiasts have been encouraged by an enabling trend: the exponential improvement in computer hardware as described by Moore's Law, according to which the number of transistors per integrated circuit doubles about every two years. At its heart, Moore's Law is about inventions that exploit one extremely durable trick: optical lithography to precisely and rapidly emplace enormous numbers of small components. If the economic demand for improved hardware continues, it looks like Moore's Law can continue for some time—though eventually we'll need novel component technology (perhaps carbon nanotubes) and some new method of high-speed emplacement (perhaps self-assembly). But what about that economic demand? Here is the remarkable thing about Moore's Law: it enables improvement in communications, embedded logic, information storage, planning, and design—that is, in areas that are directly or indirectly important to almost all enterprise. As long as the software people can successfully exploit Moore's Law, the demand for this progress should continue.
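To make the claimed doubling concrete, here is a minimal Python sketch of the trend as stated above; the 2008 baseline of roughly two billion transistors per chip is my illustrative assumption, not a figure from the essay.

# Minimal sketch of Moore's Law as stated above: transistor counts
# doubling about every two years. The 2008 baseline of ~2e9 transistors
# per chip is an illustrative assumption, not a figure from the essay.
BASELINE_YEAR = 2008
BASELINE_TRANSISTORS = 2e9
DOUBLING_PERIOD_YEARS = 2.0

def projected_transistors(year):
    """Project transistors per chip for a given year under pure doubling."""
    doublings = (year - BASELINE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASELINE_TRANSISTORS * 2 ** doublings

for year in (2008, 2018, 2030):
    print(year, "~%.2e transistors per chip" % projected_transistors(year))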
The best answer to the question, “Will computers ever be as smart as humans?” is probably “Yes, but only briefly”
Roboticist Hans Moravec may have been the first to draw a numerical connection between computer hardware trends and artificial intelligence. Writing in 1988, Moravec took his estimate of the raw computational power of the brain together with the rate of improvement in computer power and projected that by 2010 computer hardware would be available to support roughly human levels of performance. There are a number of reasonable objections to this line of argument. One objection is that Moravec may have radically underestimated the computational power of neurons. But even if his estimate is a few orders of magnitude too low, that will only delay the transition by a decade or two—assuming that Moore's Law holds.
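Vinge's "decade or two" is simple arithmetic: at a two-year doubling period, each order of magnitude of underestimate costs log2(10) ≈ 3.3 doublings, about 6.6 years. A minimal sketch, assuming nothing beyond those two numbers:

import math

# How much does a 10^k underestimate of the brain's computing power
# delay Moravec's projected crossover, if hardware doubles every 2 years?
DOUBLING_PERIOD_YEARS = 2.0

def delay_years(orders_of_magnitude):
    """Extra years needed to cover a 10^k shortfall in computing power."""
    doublings_needed = orders_of_magnitude * math.log2(10)
    return doublings_needed * DOUBLING_PERIOD_YEARS

for k in (1, 2, 3):
    print("%d order(s) of magnitude -> ~%.1f extra years" % (k, delay_years(k)))
# Three orders of magnitude adds about 20 years: "a decade or two."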
Another roboticist, Rodney Brooks, suggests in this issue that computation may not even be the right metaphor for what the brain does. If we are profoundly off the mark about the nature of thought, then this objection could be a showstopper. But research that might lead to the singularity covers a much broader range than formal computation. There is great variety even in the pursuit of pure AI. In the next decade, those who credit Moravec's timeline begin to expect results. Interestingly powerful computers will become cheap enough for a thousand research groups to bloom. Some of these researchers will pursue the classic computational tradition that Brooks is doubting—and they may still carry the day. Others will be working on their own abstractions of natural mind functions—for instance, the theory that Christof Koch and Giulio Tononi discuss in their article. Some (very likely Moravec and Brooks himself) will be experimenting with robots that cope with many of the same issues that, for animals, eventually resulted in minds that plan and feel. Finally, there will be pure neurological researchers, modeling increasingly larger parts of biological brains in silico. Much of this research will benefit from improvements in our tools for imaging brain function and manipulating small regions of the brain.
But despite Moravec's estimate and all the ongoing research, we are far short of putting the hardware together successfully. In his essay, Brooks sets several intermediate challenges. Such goals can help us measure the progress that is being made. More generally, it would be good to have indicators and counterindicators to watch for. No single one would prove the case for or against the singularity, but together they would be an ongoing guide for our assessment of the matter. Among the counterindicators (events arguing against the likelihood of the singularity) would be debacles of overweening software ambition: events ranging from the bankruptcy of a major retailer upon the failure of its new inventory management system to the defeat of network-centric war fighters by a transistor-free light infantry. A tradition of such debacles could establish limits on application complexity—independent of any claims about the power of the underlying hardware.
There are many possible positive indicators. The Turing Test—whether a human judge communicating by text alone can distinguish a computer posing as human from a real human—is a subtle but broad indicator. Koch and Tononi propose a version of the Turing Test for machine consciousness in which the computer is presented a scene and asked to “extract the gist of it” for evaluation by a human judge. One could imagine restricted versions of the Turing Test for other aspects of Mind, such as introspection and common sense.
As with past computer progress, the achievement of some goals will lead to interesting disputes and insights. Consider two of Brooks's challenges: manual dexterity at the level of a 6‑year‑old child and object-recognition capability at the level of a 2-year‑old. Both tasks would be much easier if objects in the environment possessed sensors and effectors and could communicate. For example, the target of a robot's hand could provide location and orientation data, even URLs for specialized manipulation libraries. Where the target has effectors as well as sensors, it could cooperate in the solution of kinematics issues. By the standards of today, such a distributed solution would clearly be cheating. But embedded microprocessors are increasingly widespread. Their coordinated presence may become the assumed environment. In fact, such coordination is much like relationships that have evolved between living things.
There are more general indicators. Does the distinction between neurological and AI researchers continue to blur? Does cognitive biomimetics become a common source of performance improvement in computer applications? From an entirely different direction, consider economist Robin Hanson's “shoreline” metaphor for the boundary between those tasks that can be done by machines and those that can be done only by human beings. Once upon a time, there was a continent of human-only tasks. By the end of the 1900s, that continent had become an archipelago. We might recast much of our discussion in terms of the question, “Is any place on the archipelago safe from further inundation?” Perhaps we could track this process with an objective economic index—say, wages divided by world product. However much human wealth and welfare may increase, a sustained decline in the ratio of wages to world product would argue a decline in the human contribution to the economy.
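As a toy illustration of tracking that index, here is a small sketch; every figure in it is an invented placeholder meant only to show the computation, not real data.

# Toy tracker for the index Vinge suggests: total wages divided by
# gross world product. All numbers below are hypothetical placeholders.
series = [
    (2000, 28.0, 41.0),   # (year, wages, world product) in trillions, invented
    (2010, 33.0, 55.0),
    (2020, 38.0, 70.0),
]
for year, wages, world_product in series:
    ratio = wages / world_product
    print("%d: wages / world product = %.2f" % (year, ratio))
# Even with wages rising in absolute terms, a sustained fall in this
# ratio would signal a shrinking human share of the economy.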
Machine/network life-forms will be faster, more labile, and more varied than what we see in biology. Digital Gaia is a hint of how alien the possibilities are
Some indicators relate different areas of technological speculation. In his essay, physicist Richard A.L. Jones critiques molecular nanotechnology (MNT). Even moderate success with MNT could support Moore's Law long enough to absorb a number of order-of-magnitude errors in our estimates of the computing power of the brain. At the same time, some of the advanced applications that K. Eric Drexler describes—things like cell-repair machines—depend on awesome progress with software. Thus, while success with MNT probably does not need the technological singularity (or vice versa), each would be a powerful indicator for the other.
Several of the essays discuss the plausibility of mind uploads and consequent immortality for “our digitized psyches,” ideas that have recently appeared in serious nonfiction, most notably Ray Kurzweil's The Singularity Is Near. As with nanotechnology, such developments aren't prerequisites for the singularity. On the other hand, the goal of enhancing human intelligence through human-computer interfaces (the IA Scenario) is both relevant and in view. Today a well-trained person with a suitably provisioned computer can look very smart indeed. Consider just a slightly more advanced setup, in which an Internet search capability plus math and modeling systems are integrated with a head‑up display. The resulting overlays could give the user a kind of synthetic intuition about his or her surroundings. At a more intimate but still noninvasive level, DARPA's Cognitive Technology Threat Warning System is based on the idea of monitoring the user's mental activities and feeding the resulting analysis back to the user as a supplement to his or her own attention. And of course there are the researchers working with direct neural connections to machines. Larger numbers of implanted connections may allow selection for effective subsets of connections. The human and the machine sides can train to accommodate each other.
To date, research on neural prostheses has mainly involved hearing, vision, and communication. Prostheses that could restore any cognitive function would be a very provocative indicator. In his essay, John Horgan discusses neural research, including that of T.W. Berger, into prostheses for memory function. In general, Horgan and I reach very different conclusions, but I don't think we have much disagreement about the facts; Horgan cites them to show how distant today's technology is from anything like the singularity—and I am saying, “Look here, these are the sorts of things we should track going forward, as signs of progress toward the singularity (or not).”
The Biomedical Scenario—directly improving the functioning of our own brains—has a lot of similarities to the IA Scenario, though computers would be only indirectly involved, in support of bioinformatics. In the near future, drugs for athletic ability may be only a small problem compared with drugs for intellect. If these mind drugs are not another miserable fad of uppers and downers, if they enable real improvements to memory and creativity, that would be a strong indicator for this scenario. Much further out—for both logistical and ethical reasons—is the possibility of embryo optimization and germ-line engineering. Biomedical enhancement, even the extreme varieties, probably does not scale very well; however, it might help biological minds maintain some influence over other progress.
Brooks suggests that the singularity might happen—and yet we might not notice. Of the scenarios I mentioned at the beginning of this essay, I think a pure Internet Scenario—where humanity plus its networks and databases become a superhuman being—is the most likely to leave room to argue about whether the singularity has happened or not. In this future, there might be all-but-magical scientific breakthroughs. The will of the people might manifest itself as a seamless transformation of demand and imagination into products and policy, with environmental and geopolitical disasters routinely finessed. And yet there might be no explicit evidence of a superhuman player.
A singularity arising from networks of embedded microprocessors—the Digital Gaia Scenario—would probably be less deniable, if only because of the palpable strangeness of the everyday world: reality itself would wake up. Though physical objects need not be individually sapient, most would know what they are, where they are, and be able to communicate with their neighbors (and so potentially with the world). Depending on the mood of the network, the average person might notice a level of convenience that simply looks like marvelously good luck. The Digital Gaia would be something beyond human intelligence, but nothing like human. In general, I suspect that machine/network life-forms will be faster, more labile, and more varied than what we see in biology. Digital Gaia is a hint of how alien the possibilities are.
In his essay, Hanson focuses on the economics of the singularity. As a result, he produces spectacular insights while avoiding much of the distracting weirdness. And yet weirdness necessarily leaks into the latter part of his discussion (even leaving Digital Gaia possibilities aside). AI at the human level would be a revolution in our worldview, but we can already create human-level intelligences; it takes between nine months and 21 years, depending on whom you're talking to. The consequences of creating human-level artificial intelligence would be profound, but it would still be explainable to present-day humans like you and me.
But what happens a year or two after that? The best answer to the question, “Will computers ever be as smart as humans?” is probably “Yes, but only briefly.”
For most of us, the hard part is believing that machines could ever reach parity. If that does happen, then the development of superhuman performance seems very likely—and that is the singularity. In its simplest form, this might be achieved by “running the processor clock faster” on machines that were already at human parity. I call such creatures “weakly superhuman,” since they should be understandable if we had enough time to analyze their behavior. Assuming Moore's Law muddles onward, minds will become steadily smarter. Would economics still be an important driver? Economics arises from limitations on resources. Personally, I think there will always be such limits, if only because Mind's reach will always exceed its grasp. However, what is scarce for the new minds and how they deal with that scarcity will be mostly opaque to us.
The period when economics could help us understand the new minds might last decades, perhaps corresponding to what Brooks describes as “a period, not an event.” I'd characterize such a period as a soft takeoff into the singularity. Toward the end, the world would be seriously strange from the point of view of unenhanced humans.
A soft takeoff might be as gentle as changes that humanity has encountered in the past. But I think a hard takeoff is possible instead: perhaps the transition would be fast. One moment the world is like 2008, perhaps more heavily networked. People are still debating the possibility of the singularity. And then something...happens. I don't mean the accidental construction that Brooks describes. What I'm thinking of would probably be the result of intentional research, perhaps a group exploring the parameter space of their general theory. One of their experiments finally gets things right. The result transforms the world—in just a matter of hours. A hard takeoff into the singularity could resemble a physical explosion more than it does technological progress.
If the singularity happens, the world passes beyond human ken
I base the possibility of hard takeoff partly on the known potential of rapid malcode (remember the Slammer worm?) but also on an analogy: the most recent event of the magnitude of the technological singularity was the rise of humans within the animal kingdom. Early humans could effect change orders of magnitude faster than other animals could. If we succeed in building systems that are similarly advanced beyond us, we might experience a similar incredible runaway.
Whether the takeoff is hard or soft, the world beyond the singularity contains critters who surpass natural humans in just the ability that has so empowered us: intelligence. In human history, there have been a number of radical technological changes: the invention of fire, the development of agriculture, the Industrial Revolution. One might reasonably apply the term singularity to these changes. Each has profoundly transformed our world, with consequences that were largely unimagined beforehand. And yet those consequences could have been explained to earlier humans. But if the transformation discussed in this issue of Spectrum occurs, the world will become intrinsically unintelligible to the likes of us. (And that is why “singularity,” as in “black hole singularity of physics,” is the cool metaphor here.) If the singularity happens, we are no longer the apex of intellect. There will be superhumanly intelligent players, and much of the world will be to their design. Explaining that to one of us would be like trying to explain our world to a monkey.
Both Horgan and Nordmann express indignation that singularity speculation distracts from the many serious, real problems facing society. This is a reasonable position for anyone who considers the singularity to be bogus, but some form of the point should also be considered by less skeptical persons: if the singularity happens, the world passes beyond human ken. So isn't all our singularity chatter a waste of breath? There are reasons, some minor, some perhaps very important, for interest in the singularity. The topic has the same appeal as other great events in natural history (though I am more comfortable with such changes when they are at a paleontological remove). More practically, the notion of the singularity is simply a view of progress that we can use—along with other, competing, views—to interpret ongoing events and revise our local planning. And finally: if we are in a soft takeoff, then powerful components of superintelligence will be available well before any complete entity. Human planning and guidance could help avoid ghastliness, or even help create a world that is too good for us naturals to comprehend.
Horgan concludes that “the singularity is a religious rather than scientific vision.” Brooks is more mellow, seeing “commonalities with religious beliefs” in many enthusiasts' ideas. I argue against Horgan's conclusion, but Brooks's observation is more difficult to dispute. If there were no other points to discuss, then those commonalities would be a powerful part of the skeptics' position. But there are other, more substantive arguments on both sides of the issue.
And of course, the spirituality card can be played against both skeptics and enthusiasts: Consciousness, intelligence, self-awareness, emotion—even their definitions have been debated since forever, by everyone from sophomores to great philosophers. Now, because of our computers, the applications that we are attempting, and the tools we have for observing the behavior of living brains, there is the possibility of making progress with these mysteries. Some of the hardest questions may be ill-posed, but we should see a continuing stream of partial answers and surprises. I expect that many successes will still be met by reasonable criticism of the form “Oh, but that's not really what intelligence is about” or “That method of solution is just an inflexible cheat.” And yet for both skeptics and enthusiasts, this is a remarkable process. For the skeptic, it's a bit like subtractive sculpture, where step-by-step, each partial success is removing more dross, closing in on the ineffable features of Mind—a rather spiritual prospect! Of course, we may remove and remove and find that ultimately we are left with nothing but a pile of sand—and devices that are everything we are, and more. If that is the outcome, then we've got the singularity.
About the Author
VERNOR VINGE, who wraps up this issue, first used the term singularity to refer to the advent of superhuman intelligence while on a panel at the annual conference of the Association for the Advancement of Artificial Intelligence in 1982. Three of his books—A Fire Upon the Deep (1992), A Deepness in the Sky (1999), and Rainbows End (2006)—won the Hugo Award for best science-fiction novel of the year. From 1972 to 2000, Vinge taught math and computer science at San Diego State University.
Joel de Rosnay Talking About Web 4.0 [Symbiotic Web]
Joel de Rosnay's four webs, from 1.0 to 4.0
Maybe this Web 4.0 could look like this:
Holographic Interface - round interface - Ringo from Ivan Tihienko on Vimeo.
The Future of the Web
Or will my next blog be Twistori-like, a sort of FriendFeed lifestream in perpetual motion, automatically scrolling...endlessly flowing online.
If my life is transferred into an online scrolling lifestream matrix soon, I wonder how I will be able to manage all this information by following others' feeds without losing too much of my "productive content creation" time?
Already I can hardly manage my FriendFeed lifestream because I'm overloaded by too many feeds from people I consider interesting.
Is the future of the Web [or online information systems] above all in aggregation and filtering?
Below is an interesting poll from a New York Times article, published a few months ago, about how people are spending [wasting] their office time.
According to this poll, if we could decrease our online distraction time (or "interruptions" such as reading mail, following others' lifestream feeds, chatting, etc.) and the time we spend searching for information, we could increase our "productive" time by 43%.
And even if aggregation and filtering systems do not eliminate our need for distraction, at least, we will feel a little less lost and overloaded by gathering all this streaming online information.
The question is: whom can we authorize to aggregate and filter for us?
To be really reliable, this machine must not only simulate our own value system [reflect our need for security] but also be able to push us beyond our intellectual safe zone [make us curious and able to accept alterity].
Conference in London on October 29, 2008: "Are we bored with Debord?"
"The Autumn 2008 season of Conversations hosted by Rethinking Cities opens with Alan James Bullion posing the question: "Are we bored with Debord? What can we derive from his concept of drift in the modern city?"
It is forty years since Debord’s “Society of the Spectacle” triggered civil unrest in the streets of Paris.
As the leader of the political art movement known as the Situationists in the early 1960s, Guy Debord was a proponent of the 'dérive': to walk through the city was to understand it and its class struggles.
Alan Bullion is the Lib Dem parliamentary candidate for Sevenoaks.
This Conversation will take place on the evening of Wednesday 29 October 2008 at the Royal Commonwealth Society, Northumberland Avenue, London.
Click here to register for this or other Conversations
Even though Debord's ideas were shaped in a different historical context, I think this topic is not at all anachronistic, and we can still learn a lot from him.
Read The Society of the Spectacle by Guy Debord online, in English translation here or in the original French here, or take a look at these short videos from a documentary about Guy Debord's Situationist movement, and you will see why.
Situationist International - Part 1 of 3
Situationist International - Part 2 of 3
Situationist International - Part 3 of 3
"The first stage of the economy’s domination of social life brought about an evident degradation of being into having — human fulfillment was no longer equated with what one was, but with what one possessed.
The present stage, in which social life has become completely dominated by the accumulated productions of the economy, is bringing about a general shift from having to appearing — all “having” must now derive its immediate prestige and its ultimate purpose from appearances.
At the same time all individual reality has become social, in the sense that it is shaped by social forces and is directly dependent on them. Individual reality is allowed to appear only if it is not actually real." Guy Debord, The Society of the Spectacle
Incurable American Optimism
Interviewed by Mark Molaro, American professor and media expert Paul Levinson talks here about the state, influence and future of new media.
Levinson is the author of "Digital McLuhan" and "The Soft Edge" and has appeared in countless media venues from PBS to Fox to offer his insight on media issues.
In this video Levinson discusses the current exponential rise of new media and what Marshall McLuhan would think of the digital age we now live and create in.
I wonder if this kind of 100% favorable and enthusiastic discourse about Web 2.0 and new media is possible in Europe.
Watching this video made me think of Jean Baudrillard's "hyper-information age," and once again I was reminded of Baudrillard's distinction between the American and European ways of thinking:
"Vu d'Amérique et par intellectuls américains [Susan Sontag] , le désaveu de la réaité dans les cultures européennes, et singulièrement dans la théorie française, n'est que le dépit "métaphisique" de ne plus être maître de cette réalité, et la manifestation, à la fois arogante et ironique, de cette impuissance.
Et c'est sans doute vrai.
Mais vice versa: c'est parti pris de la réaité, cet "affirmative thinking", n'est-il pas, chez les Américains, l'expression naive et idéologique du fait qu'ils ont, de par leur puissance, la monoplole de la rélaité?
Nus vivons certes dans la nostalgie ridicule de la gloire [de l'hisoire, de la culture], mais eux vivent dans l'illusion ridicule de la performance." Jean Baudrillard, Cool Memories V
http://holychic.blogspot.com/2008/07/incurable-american-optimism.html
Arnaud Pagès exhibit in Issue Gallery next week
Arnaud Pagès wearing a piece of his street wear collection
Mobile Trends 2008
Presentation "Mobile 2.0 - what is it and why should you care?" by Rudy De Waele at Plugg Conference in Brussels on March 19, 2008
A deep dive into the future of mobile with Rudy De Waele, one of the world's most renowned mobile strategists, featuring a look at historical and upcoming trends, insights on potential revenue models and the industry's leading protagonists.
Mobile and Wireless Trends for 2008
by Rudy De Waele:
1. Google's Android and the Open Handset Alliance will definitely take off in 2008. While the iPhone is probably doing the best job of embracing mobile and web convergence, the Apple OS is still a closed system used by a rather small market segment. Nokia's Nseries - though all remarkable devices - didn't produce any breakthrough Symbian OS changes last year, and the OS is still too buggy to go mass-market - I don't see my sister or father performing a device software update; which leaves the opportunity for Google and the Open Handset Alliance to get the new Linux-based operating system Android onto several cutting-edge smartphones before year-end. Mobile OS, a truly competitive space in 2008!
2. The Rise of the Mobile Social Networks. M:Metrics released some promising data mid-2007 on the rise of the Mobile Social Networks. With the big social media networks all going mobile in 2007 (Facebook, MySpace, YouTube and Bebo, …), this trend will continue to rise in 2008, sustained by more flat rate introductions on different markets.
3. Apple will be seriously attacked by the music industry on its own, once disruptive, iTunes business model. 2008 will be the year of the further downfall of DRM and the rise of watermarked audio files. Sony BMG is planning to drop DRM - the last of the Big Four record labels, after Warner Music Group, Universal Music Group and EMI Music, to throw in the towel on digital rights management. The end of DRM might embolden a host of new online download venues initiated by the Big Four in their search for a successful digital strategy. Note also the rise of new business models (!) giving away DRM-free, ad-supported music downloads, like the recently founded Rcrd Lbl by Peter Rojas. Read my post DRM Free at Last! for a recent overview and links to previous posts on this topic.
4. Telefonica will introduce the 3G iPhone. To be announced at Mobile World Congress in Barcelona in February?
5. The return of Location-Based Services. Since Nokia introduced the Nseries N95 with built-in GPS, Location-Based Services are becoming exciting again. A new wave of mobile services and applications built on the location of the user (cell-ID and/or GPS) will see the light this year, driven by the open Google Maps API and flickr's geotagged photo function. Read also my early 2005 coverage of what were formerly known as MoSoSos.
6. First iPhone competitors coming to market. Nokia will introduce a serious competitor for the iPhone. It has the hardware manufacturing intelligence and knowledge to come up with its own multi-touch screen interface. The biggest challenge for Nokia (and other manufacturers) will be to keep the OS user experience as simple as the iPhone's. Expect some great innovative devices from HTC too in 2008! (Check out the HTC Touch Dual.)
7. Mobile video blogging starting to take off. Though still used mainly by early adopters, mobile video blogging tools such as Kyte.tv mobile are already doing a great job, with Floobs and KaZiVu also looking very promising (both still in beta), not to forget YouTube Mobile. All eyes will be on Seesmic, however, which has the right start-up vibe - instigated daily by its impressively experienced shareholders (and web 2.0 icons) and its very active beta-tester community. Imagining Seesmic on your mobile phone is easy; the challenge for Seesmic is to get past the complex technical issues and deliver on its great idea.
8. Mobile search, as already predicted last year, will continue to be one of the most important and most used mobile applications. I keep this one on my list, adding that some new players might disrupt the big search-market players, which have not yet figured out the real mobile search issues such as accuracy, context, relevance, latency and the correct display of local and niche results.
9. PRM (Personal Rights Management) and privacy policies and procedures will be high on the agenda for every enterprise and every conscious connected individual. This was already the talk of the connected crowds at LeWeb3: opening up the social graph might appear cool in your social media community, but it has to be done right! As a starter, check out Dataportability.org and watch Robert Scoble explaining his recent portability issues with Facebook.
10. Twitter and the breakthrough of the ultimate mobile presence tool. Yes, Twitter is the ultimate mobile presence tool, since it's the easiest to use (through SMS and mobile web access) and the most accurate way to stay connected at any time from anywhere... Jaiku definitely has a richer client, but Twitter is the most easily integrated into most of your social networks; check out MoodBlast, which can simultaneously update multiple chat clients and web-service presence tools. 2008 will also see the rise of lifestreaming apps like Tumblr, which is surprisingly simple on the web and looks great on your mobile phone.
Saturday, 23 Feb. 2008: New Gallery Launches
Galerie e.l Bannwarth,
68, rue Julien Lacroix, 75020 Paris, Métro Belleville
Is culture only about the numbers?
This article, recently published in Time magazine, is a good example of the basic misunderstanding between European and American conceptions of culture.
The term "culture" is here simply confused with the term "entertainment".
One could almost believe, on reading this article, that for Americans culture is ONLY entertainment, just another consumer good.
The Death of French Culture
TIME magazine, Wednesday, Nov. 21, 2007
By DON MORRISON/PARIS
The days grow short. A cold wind stirs the fallen leaves, and some mornings the vineyards are daubed with frost.
Yet all across France, life has begun anew: the 2007 harvest is in. And what a harvest it has been. At least 727 new novels, up from 683 for last autumn's literary rentrée. Hundreds of new music albums and dozens of new films. Blockbuster art exhibitions at all the big museums. Fresh programs of concerts, operas and plays in the elegant halls and salles that grace French cities. Autumn means many things in many countries, but in France it signals the dawn of a new cultural year.
And nobody takes culture more seriously than the French.
They subsidize it generously; they cosset it with quotas and tax breaks. French media give it vast amounts of airtime and column inches.
Even fashion magazines carry serious book reviews, and the Nov. 5 announcement of the Prix Goncourt — one of more than 900 French literary prizes — was front-page news across the country. (It went to Gilles Leroy's novel Alabama Song.)
Every French town of any size has its annual opera or theater festival, nearly every church its weekend organ or chamber-music recital.
There is one problem. All of these mighty oaks being felled in France's cultural forest make barely a sound in the wider world. Once admired for the dominating excellence of its writers, artists and musicians, France today is a wilting power in the global cultural marketplace.
That is an especially sensitive issue right now, as a forceful new President, Nicolas Sarkozy, sets out to restore French standing in the world. When it comes to culture, he will have his work cut out for him.
LITERATURE:
Only a handful of the season's new novels will find a publisher outside France. Fewer than a dozen make it to the U.S. in a typical year, while about 30% of all fiction sold in France is translated from English.
That's about the same percentage as in Germany, but there the total number of English translations has nearly halved in the past decade, while it's still growing in France. Earlier generations of French writers — from Molière, Hugo, Balzac and Flaubert to Proust, Sartre, Camus and Malraux — did not lack for an audience abroad.
Indeed, France claims a dozen Nobel literature laureates — more than any other country — though the last one, Gao Xingjian in 2000, writes in Chinese.
FILM:
France's movie industry, the world's largest a century ago, has yet to recapture its New Wave eminence of the 1960s, when directors like François Truffaut and Jean-Luc Godard were rewriting cinematic rules.
France still churns out about 200 films a year, more than any other country in Europe. But most French films are amiable, low-budget trifles for the domestic market. American films account for nearly half the tickets sold in French cinemas. Though homegrown films have been catching up in recent years, the only vaguely French film to win U.S. box-office glory this year was the animated Ratatouille — oops, that was made in the U.S. by Pixar.
ART:
The Paris art scene, birthplace of Impressionism, Surrealism and other major -isms, has been supplanted, at least in commercial terms, by New York City and London.
Auction houses in France today account for only about 8% of all public sales of contemporary art, calculates Alain Quemin, a researcher at France's University of Marne-La-Vallée, compared with 50% in the U.S. and 30% in Britain.
In an annual calculation by the German magazine Capital, the U.S. and Germany each have four of the world's 10 most widely exposed artists; France has none. (sic!)
An ArtPrice study of the 2006 contemporary-art market found that works by the leading European figure — Britain's Damien Hirst — sold for an average of $180,000. The top French artist on the list, Robert Combas, commanded $7,500 per work.
MUSIC:
France does have composers and conductors of international repute, but no equivalents of such 20th century giants as Debussy, Satie, Ravel and Milhaud.
In popular music, French chanteurs and chanteuses such as Charles Trenet, Charles Aznavour and Edith Piaf were once heard the world over.
Today, Americans and Brits dominate the pop scene. Though the French music industry sold $1.7 billion worth of recordings and downloads last year, few performers are famous outside the country. Quick: name a French pop star who isn't Johnny Hallyday.
France's diminished cultural profile would be just another interesting national crotchet — like Italy's low birthrate, or Russia's fondness for vodka — if France weren't France. This is a country where promoting cultural influence has been national policy for centuries, where controversial philosophers and showy new museums are symbols of pride and patriotism.
Moreover, France has led the charge for a "cultural exception" that would allow governments to keep out foreign entertainment products while subsidizing their own.
French officials, who believe such protectionism is essential for saving cultural diversity from the Hollywood juggernaut, once condemned Steven Spielberg's 1993 Jurassic Park as a "threat to French identity."
They succeeded in enshrining the "cultural exception" concept in a 2005 UNESCO agreement, and regularly fight for it in international trade negotiations.
Accentuate the positive
In addition, France has long assigned itself a "civilizing mission" to improve allies and colonies alike. In 2005, the government even ordered high schools in France to teach "the positive role" of French colonialism, i.e. uplifting the natives. (The decree was later rescinded.)
Like a certain other nation whose founding principles sprang from the 18th century Enlightenment, France is not shy about its values. As Sarkozy recently observed: "In the United States and France, we think our ideas are destined to illuminate the world."
Sarkozy is eager to pursue that destiny. The new President has pledged to bolster not just France's economy, work ethic and diplomatic standing — he has also promised to "modernize and deepen the cultural activity of France."
Details are sketchy, but the government has already proposed an end to admission charges at museums and, while cutting budgets elsewhere, hiked the Culture Ministry's by 3.2%, to $11 billion.
Whether such efforts will have much impact on foreign perception is another matter. In a September poll of 1,310 Americans for Le Figaro magazine, only 20% considered culture to be a domain in which France excels, far behind cuisine.
Domestic expectations are low as well. Many French believe the country and its culture have been in decline since — pick a date: 1940 and the humiliating German occupation; 1954, the start of the divisive Algerian conflict; or 1968, the revolutionary year which conservatives like Sarkozy say brought France under the sway of a new, more casual generation that has undermined standards of education and deportment.
For French of all political colors, déclinisme has been a hot topic in recent years. Bookstores are full of jeremiads like France is Falling, The Great Waste, The War of the Two Frances and The Middle Class Adrift.
Talk-show guests and opinion columnists decry France's fading fortunes, and even the French rugby team's failure at the World Cup — held in France this year — was chewed over as an index of national decay. But most of those laments involve the economy, and Sarkozy's ascension was due largely to his promise to attend to them.
Cultural decline is a more difficult failing to assess — and address. Traditionally a province of the right, it speaks to the nostalgia of some French for the more rigorous, hierarchical society of the 19th and early 20th centuries.
Paradoxically, that starchy era inspired much of France's subsequent cultural vitality.
"A lot of French artists were created in opposition to the education system," says Christophe Boïcos, a Paris art lecturer and gallery owner. "Romantics, Impressionists, Modernists — they were rebels against the academic standards of their day. But those standards were quite high and contributed to the impressive quality of the artists who rebelled against them."
The taint of talkiness
Quality, of course, is in the eye of the beholder — as is the very meaning of culture. The term originally referred to the growing of things, as in agriculture. Eventually it came to embrace the cultivation of art, music, poetry and other "high-culture" pursuits of a high-minded élite.
In modern times, anthropologists and sociologists have broadened the term to embrace the "low-culture" enthusiasms of the masses, as well as caste systems, burial customs and other behavior.
The French like to have it all ways. Their government spends 1.5% of GDP supporting a wide array of cultural and recreational activities (vs. only 0.7% for Germany, 0.5% for the U.K. and 0.3% for the U.S.). The Culture Ministry, with its 11,200 employees, lavishes money on such "high-culture" mainstays as museums, opera houses and theater festivals.
But the ministry also appointed a Minister for Rock 'n' Roll in the 1980s to help France compete against the Anglo-Saxons (unsuccessfully). Likewise, parliament in 2005 voted to designate foie gras as a protection-worthy part of the nation's cultural heritage.
Cultural subsidies in France are ubiquitous. Producers of just about any nonpornographic movie can get an advance from the government against box-office receipts (most loans are never fully repaid).
Proceeds from an 11% tax on cinema tickets are plowed back into subsidies. Canal Plus, the country's leading pay-TV channel, must spend 20% of its revenues buying rights to French movies.
By law, 40% of shows on TV and music on radio must be French. Separate quotas govern prime-time hours to ensure that French programming is not relegated to the middle of the night.
The government provides special tax breaks for freelance workers in the performing arts. Painters and sculptors can get subsidized studio space.
The state also runs a shadow program out of the Foreign Ministry that goes far beyond the cultural efforts of other major countries.
France sends planeloads of artists, performers and their works abroad, and it subsidizes 148 cultural groups, 26 research centers and 176 archaeological digs overseas.
With all those advantages, why don't French cultural offerings fare better abroad?
One problem is that many of them are in French, now merely the world's 12th most widely spoken language (Chinese is first, English second).
Worse still, the major organs of cultural criticism and publicity — the global buzz machine — are increasingly based in the U.S. and Britain. "In the '40s and '50s, everybody knew France was the center of the art scene, and you had to come here to get noticed," says Quemin. "Now you have to go to New York."
Another problem may be the subsidies, which critics say ensure mediocrity. In his widely discussed 2006 book On Culture in America, former French cultural attaché Frédéric Martel marvels at how the U.S. can produce so much "high" culture of lofty quality with hardly any government support. He concludes that subsidy policies like France's discourage private participants — and money — from entering the cultural space. Martel observes: "If the Culture Ministry is nowhere to be found, cultural life is everywhere."
Other critics warn that protecting cultural industries narrows their appeal. With a domestic market sheltered by quotas and a language barrier, French producers can thrive without selling overseas. Only about 1 in 5 French films gets exported to the U.S., 1 in 3 to Germany. "If France were the only nation that could decide what is art and what is not, then French artists would do very well," says Quemin. "But we're not the only player, so our artists have to learn to look outside."
Certain aspects of national character may also play a role.
Abstraction and theory have long been prized in France's intellectual life and emphasized in its schools.
Nowhere is that tendency more apparent than in French fiction, which still suffers from the introspective 1950s nouveau roman (new novel) movement.
Many of today's most critically revered French novelists write spare, elegant fiction that doesn't travel well. Others practice what the French call autofiction — thinly veiled memoirs that make no bones about being conceived in deep self-absorption. Christine Angot received the 2006 Prix de Flore for her latest work, Rendez-vous, an exhaustively introspective dissection of her love affairs. One of the few contemporary French writers widely published abroad, Michel Houellebecq, is known chiefly for misogyny, misanthropy and an obsession with sex. "In America, a writer wants to work hard and be successful," says François Busnel, editorial director of Lire, a popular magazine about books (only in France!). "French writers think they have to be intellectuals."
Conversely, foreign fiction — especially topical, realistic novels — sells well in France. Such story-driven Anglo-Saxon authors as William Boyd, John le Carré and Ian McEwan are over-represented on French best-seller lists, while Americans such as Paul Auster and Douglas Kennedy are considered adopted sons.
"This is a place where literature is still taken seriously," says Kennedy, whose The Woman in the Fifth was a recent best seller in French translation. "But if you look at American fiction, it deals with the American condition, one way or another. French novelists produce interesting stuff, but what they are not doing is looking at France."
French cinema has also suffered from a nouveau roman complex. "The typical French film of the '80s and '90s had a bunch of people sitting at lunch and disagreeing with each other," quips Marc Levy, one of France's best-selling novelists. (His Et si c'Etait Vrai... , published in English as If Only It Were True, became the 2005 Hollywood film Just Like Heaven starring Reese Witherspoon and Mark Ruffalo.) "An hour and a half later, they are sitting at dinner, and some are agreeing while others are disagreeing." France today can make slick, highly commercial movies — Amélie, Brotherhood of the Wolf — but for many foreigners the taint of talkiness lingers.
The next act
How to make France a cultural giant again? One place to start is the education system, where a series of reforms over the years has crowded the arts out of the curriculum. "One learns to read at school, one doesn't learn to see," complains Pierre Rosenberg, a former director of the Louvre museum.
To that end, Sarkozy has proposed an expansion of art-history courses for high schoolers. He has also promised measures to entice more of them to pursue the literature baccalaureate program. Once the most popular course of study, it is now far outstripped by the science and economics-sociology options. "We need literary people, pupils who can master speech and reason," says Education Minister Xavier Darcos. "They are always in demand."
Sarkozy sent a chill through the French intelligentsia last summer by calling for the "democratization" of culture.
Many took this to mean that cultural policy should be based on market forces, not on professional judgments about quality. With more important adversaries to confront — notably the pampered civil-service unions — Sarkozy is unlikely to pick a fight over cultural subsidies, which remain vastly popular.
But the government may well try to foster private participation by tinkering with the tax system. "In the U.S. you can donate a painting to a museum and take a full deduction," says art expert Boïcos. "Here it's limited. Here the government makes the important decisions. But if the private sector got more involved and cultural institutions got more autonomy, France could undergo a major artistic revival."
Sarkozy's appointment of Christine Albanel as Culture Minister looks like a vote for individual initiative: as director of Versailles, she has cultivated private donations and partnerships with businesses. The Louvre has gone one step further by effectively licensing its name to offshoots in Atlanta and Abu Dhabi.
A more difficult task will be to change French thinking. Though it is perilous to generalize about 60 million people, there is a strain in the national mind-set that distrusts commercial success.
Opinion polls show that more young French aspire to government jobs than to careers in business.
"Americans think that if artists are successful, they must be good," says Quemin.
"We think that if they're successful, they're too commercial. Success is considered bad taste."
At the same time, other countries' thinking could use an update. Britain, Germany and the U.S. in particular are so focused on their own enormous cultural output that they tend to ignore France. Says Guy Walter, director of the Villa Gillet cultural center in Lyon: "When I point out a great new French novel to a New York publisher, I am told it's 'too Frenchy.' But Americans don't read French, so they don't really know."
What those foreigners are missing is that French culture is surprisingly lively. Its movies are getting more imaginative and accessible. Just look at the Taxi films of Luc Besson and Gérard Krawczyk, a rollicking series of Hong Kong-style action comedies; or at such intelligent yet crowd-pleasing works as Cédric Klapisch's L'Auberge Espagnole and Jacques Audiard's The Beat That My Heart Skipped, both hits on the foreign art-house circuit.
French novelists are focusing increasingly on the here and now: one of the big books of this year's literary rentrée, Yasmina Reza's L'Aube le Soir ou la Nuit (Dawn Dusk or Night) is about Sarkozy's recent electoral campaign.
Another standout, Olivier Adam's A l'Abri de Rien (In the Shelter of Nothing), concerns immigrants at the notorious Sangatte refugee camp. France's Japan-influenced bandes dessinées (comic-strip) artists have made their country a leader in one of literature's hottest genres: the graphic novel.
Singers like Camille, Benjamin Biolay and Vincent Delerm have revived the chanson. Hip-hop artists like Senegal-born MC Solaar, Cyprus-born Diam's and Abd al Malik, a son of Congolese immigrants, have taken the verlan of the streets and turned it into a sharper, more poetic version of American rap.
Therein may lie France's return to global glory. The country's angry, ambitious minorities are committing culture all over the place. France has become a multiethnic bazaar of art, music and writing from the banlieues and disparate corners of the nonwhite world. African, Asian and Latin American music get more retail space in France than perhaps any other country.
Movies from Afghanistan, Argentina, Hungary and other distant lands fill the cinemas. Authors of all nations are translated into French and, inevitably, will influence the next generation of French writers.
Despite all its quotas and subsidies, France is a paradise for connoisseurs of foreign cultures. "France has always been a country where people could come from any country and immediately start painting or writing in French — or even not in French," says Marjane Satrapi, an Iranian whose movie based on her graphic novel Persepolis is France's 2008 Oscar entry in the Best Foreign Film category. "The richness of French culture is based on that quality."
And what keeps a nation great if not the infusion of new energy from the margins? Expand the definition of culture a bit, and you'll find three fields in which France excels by absorbing outside influences.
First, France is arguably the world leader in fashion, thanks to the sharp antennae of its cosmopolitan designers.
Second, French cuisine — built on the foundation of Italian and, increasingly, Asian traditions — remains the global standard.
Third, French winemakers are using techniques developed abroad to retain their reputation for excellence in the face of competition from newer wine-growing regions.
Tellingly, many French vines were long ago grafted onto disease-resistant rootstocks from, of all places, the U.S. "We have to take the risk of globalization," says Villa Gillet's Guy Walter. "We must welcome the outside world."
Jean-Paul Sartre, the giant of postwar French letters, wrote in 1946 to thank the U.S. for Hemingway, Faulkner and other writers who were then influencing French fiction — but whom Americans were starting to take for granted. "We shall give back to you these techniques which you have lent us," he promised. "We shall return them digested, intellectualized, less effective, and less brutal — consciously adapted to French taste. Because of this incessant exchange, which makes nations rediscover in other nations what they have invented first and then rejected, perhaps you will rediscover in these new [French] books the eternal youth of that 'old' Faulkner."
Thus will the world discover the eternal youth of France, a nation whose long quest for glory has honed a fine appreciation for the art of borrowing.
And when the more conventional minds of the French cultural establishment — along with their self-occupied counterparts abroad — stop fretting about decline and start applauding the ferment on the fringes, France will reclaim its reputation as a cultural power, a land where every new season brings a harvest of genius.
source: http://www.time.com/time/magazine/article/0,9171,1686532,00.html
With reporting by Grant Rosenberg/Paris
We live in exponential times
The idea is that shifts in society are presenting us with some very real issues that must be addressed, along with some startling facts.
The facts:
-In the next eight seconds, 32 babies will be born.
-Of the world's 2006 College graduates: 1.3 million came from the U.S., 3.1 million came from India, and 3.3 million came from China.
-Of the 3.1 million graduates from India, 100 percent speak English.
-It is estimated that within ten years the world's largest population of English speakers will be in China.
-One in four workers has been with their current employer for less than one full year.
-The United States Department of Labor estimates that today's students will have about 10 jobs by the time they are 38.
-Most of today's college majors did not even exist ten years ago, majors like new media, organic agriculture, e-business, nanotechnology, and homeland security.
-Today's 21-year-olds have viewed 20,000 hours of television, played 10,000 hours of video games, spent 10,000 hours talking on the phone, sent 250,000 emails or instant messages, and created more than 50% of today's internet content.
-70% of today's 4-year-olds have used the internet.
-It took radio 38 years to reach a market of 50 million people. Television took only 13 years to reach 50 million. The internet took just 4 years.
-The number of internet devices jumped from 1,000 in 1984 to 600,000,000 in 2006.
-The first commercial text message was sent in 1992, and today the number of text messages sent on any given day exceeds the population of our planet.
-The internet was first used widely in 1995, and in 2006, one in eight married couples met online.
-In this month alone, 2.7 billion searches were performed on the Google search engine.
-There are currently more than 540,000 words in the English language, more than 5 times as many as in Shakespeare's time.
-Today the amount of new technical information doubles every two years, and by 2010 it is predicted to double every two days.
-The fiber-optic technology currently in use can push 10 trillion bits per second down a single strand of fiber, which translates into 1,900 compact discs (CDs) or 150 million simultaneous telephone calls every second (a quick arithmetic check follows this list).
-Nearly 2 billion children live in developing countries, and one in three of them does not complete the fifth grade. The One Laptop per Child project set out to change this by providing laptops for these children.
-It is believed that by the time children born in 2007 are six years old, a supercomputer will have more computational power than the human brain.
-It is predicted that by 2048 a $1,000 computer will surpass the entire human race in computational power (the sketch after this list also tests this extrapolation).
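Two of these figures are easy to sanity-check with back-of-the-envelope arithmetic. The minimal Python sketch below redoes the fiber-optic conversion and the 2048 extrapolation; the CD size, the voice-call bitrate, the 2007 price-performance figure, the brain estimate, and the 18-month doubling period are my own assumptions, not figures from the original presentation.

    import math

    # Claim 1: 10 trillion bits per second down one strand of fiber.
    FIBER_BPS = 10e12          # 10 trillion bits/s (the claimed figure)
    CD_BITS = 650 * 8 * 10**6  # assume a standard 650 MB compact disc
    CALL_BPS = 64_000          # assume a 64 kbit/s digital voice channel

    print(f"CDs per second:     {FIBER_BPS / CD_BITS:,.0f}")   # ~1,900, matching the claim
    print(f"Simultaneous calls: {FIBER_BPS / CALL_BPS:,.0f}")  # ~156 million, close to the claimed 150 million

    # Claim 2: by 2048 a $1,000 computer outcomputes all of humanity.
    # Assume ~1e11 ops/s for a $1,000 machine in 2007, ~1e16 ops/s per
    # human brain, ~1e10 brains, and price-performance doubling every
    # 18 months (all assumptions, for illustration only).
    pc_2007 = 1e11
    humanity = 1e10 * 1e16
    years_to_cross = 1.5 * math.log2(humanity / pc_2007)

    print(f"Crossover year:     {2007 + years_to_cross:.0f}")  # ~2082 with these assumptions

The fiber numbers check out, but under these particular assumptions the computing crossover lands closer to 2080 than 2048; reaching 2048 would require a doubling time of about a year or a lower estimate of the brain's power, which shows how sensitive such forecasts are to their inputs.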
What does it all mean?
SHIFT HAPPENS
Today's students are being trained to perform jobs that don't exist yet, using technologies we do not yet have, in order to solve problems we don't yet know are problems.
The truth of the matter is that WE LIVE IN EXPONENTIAL TIMES.
What does that mean exactly?
It means that our society is changing at an exponential pace, and there are things we must recognize now in order to prepare for the future.
Entering the zone of total transparency
This 5-minute video, produced by the Italian Web consulting agency Casaleggio Associati, aims to predict the future of the Internet over the next 43 years.
In short, the Net will come to include and unify not only media content but also our private lives.
And the big winner will be Google. By 2051 we will live an all-encompassing Second Life named Prometeus. You will be present ONLY as your avatar, and you will not exist outside Prometeus.
Devices that replicate the five senses will be available in the virtual worlds. We will really feel and live in this Second Life (oops, Prometeus, controlled by Google). By the way, even this blog is owned by Google, which is probably collecting more information about me than I could gather myself.
A new providence, or a sharp viral ad? Should we regret that we will not be alive in 2051, when our lives will be so constrained by all-embracing Google control that the only way to become what you want will be not to live your own life but to live the (marketable!) life of your avatar in the Second Life?
On the other hand, it is the only sure wager on our eternity: maybe we will still be alive in 2051 precisely thanks to Google, which is collecting everything about us right now (watch the video at the end of this post).
But first, read the complete text of the video above:
"Man is God. He is everywhere, he is anybody, he knows everything.
This is the new world of Prometeus. It all started with the Media Revolution, with the Internet, at the end of the last century.
Everything related to the old media vanished: Gutenberg, copyright, radio, television, advertising.
The old world reacts: more restrictions on copyright, new laws against unauthorized copies. Napster, the music peer-to-peer company, is sued.
At the same time, free internet radio appears; TiVo, internet television, makes it possible to skip advertising; the Wall Street Journal goes online; Google launches Google News.
Millions of people daily read the biggest online newspaper,
OhMyNews, written by thousands of journalists.
Flickr becomes the biggest photo repository in history, YouTube the biggest for movies.
The power of the masses.
A new figure emerges: the prosumer, a producer and a consumer of information. Anyone can be a prosumer.
News channels become available on the Internet. Blogs become more influential than the old media. Newspapers are distributed for free.
Wikipedia is the most complete encyclopedia ever.
In 2007 Life magazine closes (sic!). The NYT sells its television channel and declares that the future is digital. The BBC follows. In the main cities of the world, people are connected for free. On street corners, totems print pages from blogs and digital magazines.
The virtual worlds are common places on the Internet for millions of people. A person can have multiple online identities. Second Life launches the vocal avatar.
The old media fight back. A tax is added to every screen; newspapers, radio and television are financed by the State; illegal downloading from the web is punished with years in jail.
Around 2011 the tipping point is reached: advertising investment moves to the Net. Electronic paper becomes a mass product: anyone can read anything on plastic paper.
In 2015 newspapers and broadcast television disappear, digital terrestrial is abandoned, and radio moves onto the Internet. The media arena is less and less populated. Only the Tyrannosaurus Rex survives.
The Net includes and unifies all the content.
Google buys Microsoft. Amazon buys Yahoo!, and together they become the world's universal content leaders, along with the BBC, CNN and CCTV.
The concept of static information (books, articles, images) changes and is transformed into a flow of knowledge.
Advertising is chosen by the content creators, by the authors, and becomes information, comparison, experience.
In 2020 Lawrence Lessig, the author of 'Free Culture', is the new US Secretary of Justice and declares copyright illegal.
Devices that replicate the five senses are available in the virtual worlds. Reality can be replicated in Second Life.
Anyone has an Agav (agent-avatar) that finds information, people and places in the virtual worlds.
In 2022 Google launches Prometeus, the Agav standard interface.
Amazon creates Place, a company that replicates reality.
You can be on Mars, at the battle of Waterloo, at the Super Bowl, in person. It's real.
In 2027 Second Life evolves into Spirit. People become who they want. And they share memories. Experiences. Feelings. Memory selling becomes an ordinary trade.
In 2050 Prometeus buys Place and Spirit. Virtual life is the biggest market on the planet. Prometeus finances all space missions to find new worlds for its customers: the terrestrial avatars.
Experience is the new reality."
Voice: Philip K. Dick Avatar!
Watch This Short Movie About The Power Of Google
3 Winners of 2 Major Modern Art Awards During the Modern Art Week in Paris
As usual, two major French modern art awards, the Duchamp Award and the Ricard Company Foundation Award, were presented during the October "Modern Art Week" in Paris, when many modern art fairs and events take place.
The seventh Duchamp Award, created by the modern art collectors' association ADIAF (Association for Intl. Distribution of French Art) in partnership with the Pompidou Center and the FIAC, went to Tatiana Trouvé, born in 1968 in Italy and living in Paris (represented by Galerie Almine Rech).
The award includes 35,000 euros, publication of a catalog by the Pompidou Center, and the privilege of exhibiting her works for two months in L'Espace 315 at the Beaubourg in spring 2008.
The Ricard Company Foundation Award was given by a panel of collectors to two artists from Marseille, Christophe Berdaguer (born in 1968) and Mary Péjus (born in 1969) (represented by Galerie Martine Aboucaya), for their installation Dreamland / Disappear Here (2007).
Purchased for 15,000 euros, the piece will be donated to the Pompidou Center, which will exhibit it in its permanent collection. Until November 17, 2007, it will be on display at the Fondation Ricard, 12 rue Boissy d'Anglas, as part of the exhibition "Drift" (curated by Mathieu Mercier).
FIAC You, 46 years ago in Paris
Claudia Cardinale and the late Jean-Claude Brialy in the movie The Lions Are Loose [Les lions sont lâchés, 1961], directed by Henri Verneuil
FIAC 2007 Opened Last Night at the Cour Carrée of the Louvre Museum
and works by artists from all over the world...