Showing posts with label Prospective. Show all posts

The author of The Information Age on Network Society



A long (58 min) but very interesting talk by sociologist Manuel Castells on identity and social change in the network society. Taped eight years ago, but not at all outdated.

http://holychic.blogspot.com/2009/07/author-of-internet-galaxy-on-internet.html

US NOW, Ivo Gormley's docu on eDemocracy is in full online now

Thousands of people tuned in on May 12, 2009 to watch the film Us Now and view the launch events’ panel discussions in London and Harvard coordinated by FutureGov and the British Council.

Us Now is a one-hour film about the power of mass collaboration, government and the internet. It tells the stories of online networks that are challenging the existing notion of hierarchy. For the first time, it brings together the foremost thinkers in the field of participative governance to describe the future of government.

Us Now takes a look at how this type of participation could transform the way that countries are governed. It tells the stories of the online networks whose radical self-organising structures threaten to change the fabric of government forever.

Contributors: Clay Shirky, Don Tapscott, Charles Leadbeater, William Heath, Martin Sticksl, Lee Bryant, Tom Steinberg, Ed Miliband, George Osborne, Saul Albert, Mikey Weinkove, Sunny Hundal, Sophia Parker, JP Rangaswami, Paul Miller, Becky Hogge, Matthew Taylor, MT Rainy, Giles Andrews, Shane Kelly, Liam Daish

The film follows the fate of Ebbsfleet United, a football club owned and run by its fans;

Zopa, a bank in which everyone is the manager;

Couch Surfing, a vast online network whose members share their homes with strangers,

Directionless.info, which lets you get recommendations and information from the net while you're out on the street by doing what comes naturally: asking people;

HorsesMouth.co.uk, for those looking for a mentor or wanting to become one;

School of Everything, which helps teachers and learners find each other, whether you are looking for a teacher or want to be one;

Slice the Pie, which gives artists, fans and investors a piece of the music industry;

Ideal Government, where you can say what you want from e-enabled government and observe government first-hand;

The People Speak, a campaign to engage young people on the global issues that will shape their future;

Morecambe & Heysham Model Railway Club

Social Innovation Lab for Kent

Green Party Canada

The Point

Headshift

Ethical hacker

TheyWorkForYou.com


Director: Ivo Gormley, May 2009
1 hour and 32 seconds, United Kingdom
License: CC - Attribution Non-commercial No Derivatives

Us Now from Banyak Films on Vimeo.

More on Us Now Blog

Watch the movie in larger format

http://holychic.blogspot.com/2009/06/us-now-10-translations-dotsub.html

THE FUTURIST magazine's Top Ten Forecasts for 2009 and beyond


Cover story from December 2008 edition of The Futurist magazine

OUTLOOK 2009 More sex, fewer antidepressants; more transparency online, less privacy in real life. These are among the World Future Society’s latest roundup of more than 70 forecasts for your changing world. PDF available




Forecast # 1: Everything you say and do will be recorded by 2030. By the late 2010s, ubiquitous unseen nanodevices will provide seamless communication and surveillance among all people everywhere. Humans will have nanoimplants, facilitating interaction in an omnipresent network. Everyone will have a unique Internet Protocol (IP) address. Since nano storage capacity is almost limitless, all conversation and activity will be recorded and recoverable. — Gene Stephens, “Cybercrime in the Year 2025,” THE FUTURIST July-Aug 2008.


Forecast #2: Bioviolence will become a greater threat as the technology becomes more accessible. Emerging scientific disciplines (notably genomics, nanotechnology, and other microsciences) could pave the way for a bioattack. Bacteria and viruses could be altered to increase their lethality or to evade antibiotic treatment.— Barry Kellman, “Bioviolence: A Growing Threat,” THE FUTURIST May-June 2008.


Forecast #3: The car's days as king of the road will soon be over. More powerful wireless communication that reduces demand for travel, flying delivery drones to replace trucks, and policies to restrict the number of vehicles owned in each household are among the developments that could thwart the automobile's historic dominance over the environment and culture. If current trends were to continue, the world would have to make way for a total of 3 billion vehicles on the road by 2025. — Thomas J. Frey, “Disrupting the Automobile’s Future,” THE FUTURIST, Sep-Oct 2008.


Forecast #4: Careers, and the college majors for preparing for them, are becoming more specialized. An increase in unusual college majors may foretell the growth of unique new career specialties.
Instead of simply majoring in business, more students are beginning to explore niche majors such as sustainable business, strategic intelligence, and entrepreneurship.
Other unusual majors that are capturing students' imaginations: neuroscience and nanotechnology, computer and digital forensics, and comic book art. Scoff not: The market for comic books and graphic novels in the United States has grown 12% since 2006. —THE FUTURIST, World Trends & Forecasts, Sep-Oct 2008.


Forecast #5: There may not be world law in the foreseeable future, but the world's legal systems will be networked. The Global Legal Information Network (GLIN), a database of local and national laws for more than 50 participating countries, will grow to include more than 100 countries by 2010. The database will lay the groundwork for a more universal understanding of the diversity of laws between nations and will create new opportunities for peace and international partnership.— Joseph N. Pelton, "Toward a Global Rule of Law: A Practical Step Toward World Peace," THE FUTURIST Nov-Dec 2007.


Forecast #6: The race for biomedical and genetic enhancement will — in the twenty-first century — be what the space race was in the previous century. Humanity is ready to pursue biomedical and genetic enhancement, says UCLA professor Gregory Stock, and the money is already being invested. But, he says, “We'll also fret about these things — because we're human, and it's what we do.” — Gregory Stock quoted in THE FUTURIST, Nov-Dec 2007.


Forecast #7: Professional knowledge will become obsolete almost as quickly as it's acquired. An individual's professional knowledge is becoming outdated at a much faster rate than ever before. Most professions will require continuous instruction and retraining. Rapid changes in the job market and work-related technologies will necessitate job education for almost every worker. At any given moment, a substantial portion of the labor force will be in job retraining programs. — Marvin J. Cetron and Owen Davies, "Trends Shaping Tomorrow's World, Part Two," THE FUTURIST May-June 2008.


Forecast #8: Urbanization will hit 60% by 2030. As more of the world's population lives in cities, rapid development to accommodate them will make existing environmental and socioeconomic problems worse. Epidemics will be more common due to crowded dwelling units and poor sanitation. Global warming may accelerate due to higher carbon dioxide output and loss of carbon-absorbing plants. — Marvin J. Cetron and Owen Davies, “Trends Shaping Tomorrow's World,” THE FUTURIST Mar-Apr 2008.


Forecast #9: The Middle East will become more secular while religious influence in China will grow. Popular support for religious government is declining in places like Iraq, according to a University of Michigan study. The researchers report that in 2004 only one-fourth of respondents polled believed that Iraq would be a better place if religion and politics were separated. By 2007, that proportion was one-third. Separate reports reveal a countertrend in China. — World Trends & Forecasts, THE FUTURIST Nov-Dec 2007.


Forecast #10: Access to electricity will reach 83% of the world by 2030. Electrification has expanded around the world, from 40% connected in 1970 to 73% in 2000, and may reach 83% of the world's people by 2030. Electricity is fundamental to raising living standards and access to the world's products and services. Impoverished areas such as Sub-Saharan Africa still have low rates of electrification; Uganda is just 3.7% electrified. — Andy Hines, “Global Trends in Culture, Infrastructure, and Values,” Sep-Oct 2008.


More About Singularity

or about the question: Will Machines Dominate Humans Soon?

Here is a very interesting 80-minute rough cut of Ken Humbs's 2006 film Building Gods, featuring artificial intelligence researchers Hugo de Garis and Kevin Warwick and philosopher Nick Bostrom.



And below is a new 44-minute Next World video, Future of Intelligence, featuring Ray Kurzweil [who, along with Joel Garreau, Vernor Vinge and Bruce Sterling, popularized the term "singularity"], Guido Jouret, Hiroshi Ishiguro, Stephen Jacobsen, Rod Humble, Seth Goldstein, Dave Evans, Michel Parent, Stephane Aubarbier, Jeff Kleiser, Marthin De Beer, Steve Kieron, Marie Hattar, Brian Conte, James Kuffner and Kevin Warwick.

Illustrative table of Who's Who in Singularity: Click here for a large version of this chart [PDF format].


Signs of the Singularity
By Vernor Vinge

This is part of IEEE Spectrum's SPECIAL REPORT: THE SINGULARITY


I think it's likely that with technology we can in the fairly near future create or become creatures of more than human intelligence. Such a technological singularity would revolutionize our world, ushering in a posthuman epoch. If it were to happen a million years from now, no big deal. So what do I mean by “fairly near” future? In my 1993 essay, “The Coming Technological Singularity,” I said I'd be surprised if the singularity had not happened by 2030. I'll stand by that claim, assuming we avoid the showstopping catastrophes—things like nuclear war, superplagues, climate crash—that we properly spend our anxiety upon.

In that event, I expect the singularity will come as some combination of the following:


The AI Scenario: We create superhuman artificial intelligence (AI) in computers.


The IA Scenario: We enhance human intelligence through human-to-computer interfaces—that is, we achieve intelligence amplification (IA).


The Biomedical Scenario: We directly increase our intelligence by improving the neurological operation of our brains.


The Internet Scenario: Humanity, its networks, computers, and databases become sufficiently effective to be considered a superhuman being.


The Digital Gaia Scenario: The network of embedded microprocessors becomes sufficiently effective to be considered a superhuman being.


The essays in this issue of IEEE Spectrum use similar definitions for the technological singularity but variously rate the notion from likely to totally bogus. I'm going to respond to arguments made in these essays and also mine them for signs of the oncoming singularity that we might track in the future.

Philosopher Alfred Nordmann criticizes the extrapolations used to argue for the singularity. Using trends for outright forecasting is asking for embarrassment. And yet there are a couple of trends that at least raise the possibility of the technological singularity. The first is a very long-term trend, namely Life's tendency, across aeons, toward greater complexity. Some people see this as unstoppable progress toward betterment. Alas, one of the great insights of 20th-century natural science is that Nature can be the harshest of masters. What we call progress can fail. Still, in the absence of a truly terminal event (say, a nearby gamma-ray burst or another collision such as made the moon), the trend has muddled along in the direction we call forward. From the beginning, Life has had the ability to adapt for survival via natural selection of heritable traits. That computational scheme brought Life a long way, resulting in creatures that could reason about survival problems. With the advent of humankind, Life had a means of solving many problems much faster than natural selection.

In the last few thousand years, humans have begun the next step, creating tools to support cognitive function. For example, writing is an off-loading of memory function. We're building tools—computers, networks, database systems—that can speed up the processes of problem solving and adaptation. It's not surprising that some technology enthusiasts have started talking about possible consequences. Depending on our inventiveness—and our artifacts' inventiveness—there is the possibility of a transformation comparable to the rise of human intelligence in the biological world. Even if the singularity does not happen, we are going to have to put up with singularity enthusiasms for a long time.

Get used to it.

In recent decades, the enthusiasts have been encouraged by an enabling trend: the exponential improvement in computer hardware as described by Moore's Law, according to which the number of transistors per integrated circuit doubles about every two years. At its heart, Moore's Law is about inventions that exploit one extremely durable trick: optical lithography to precisely and rapidly emplace enormous numbers of small components. If the economic demand for improved hardware continues, it looks like Moore's Law can continue for some time—though eventually we'll need novel component technology (perhaps carbon nanotubes) and some new method of high-speed emplacement (perhaps self-assembly). But what about that economic demand? Here is the remarkable thing about Moore's Law: it enables improvement in communications, embedded logic, information storage, planning, and design—that is, in areas that are directly or indirectly important to almost all enterprise. As long as the software people can successfully exploit Moore's Law, the demand for this progress should continue.
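Vinge's description of Moore's Law is easy to sanity-check numerically. The sketch below is illustrative only: the 1971 Intel 4004 baseline (~2,300 transistors) and the clean two-year doubling period are simplifying assumptions of mine, not claims from the essay.

```python
def transistors(year, base_year=1971, base_count=2_300, doubling_years=2):
    """Project transistor counts under a naive Moore's Law doubling model.

    Assumes the Intel 4004's ~2,300 transistors (1971) as a starting point
    and a clean two-year doubling period -- both illustrative simplifications.
    """
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

# A two-year doubling compounds into roughly 1,000x growth every 20 years,
# since 2**10 = 1024.
for year in (1971, 1991, 2011):
    print(year, f"{transistors(year):,.0f}")
```

The takeaway is only the shape of the curve: any process that doubles on a fixed period grows by three orders of magnitude per ten doublings, which is what makes the economic-demand question, rather than the physics, the binding constraint in Vinge's argument.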

The best answer to the question, “Will computers ever be as smart as humans?” is probably “Yes, but only briefly”
Roboticist Hans Moravec may have been the first to draw a numerical connection between computer hardware trends and artificial intelligence. Writing in 1988, Moravec took his estimate of the raw computational power of the brain together with the rate of improvement in computer power and projected that by 2010 computer hardware would be available to support roughly human levels of performance. There are a number of reasonable objections to this line of argument. One objection is that Moravec may have radically underestimated the computational power of neurons. But even if his estimate is a few orders of magnitude too low, that will only delay the transition by a decade or two—assuming that Moore's Law holds.
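Vinge's point that even a large underestimate of the brain's power only delays the timeline "by a decade or two" follows directly from the doubling arithmetic: each factor of ten costs about 3.3 doublings. A minimal sketch, assuming a steady two-year doubling period:

```python
import math

def delay_years(orders_of_magnitude, doubling_period=2.0):
    """Years of extra exponential growth needed to make up an underestimate
    (expressed in powers of ten) of the brain's computational power.

    Assumes hardware capability keeps doubling every `doubling_period` years.
    """
    doublings_needed = orders_of_magnitude * math.log2(10)  # 10x ~= 3.32 doublings
    return doublings_needed * doubling_period

# An estimate that is 1,000x (three orders of magnitude) too low shifts
# Moravec's 2010 date by about two decades.
print(round(delay_years(3)))  # -> 20
```

This is why the objection about neuron complexity weakens rather than kills the argument: under sustained exponential growth, even very large constant-factor errors translate into modest shifts of the date.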

Another roboticist, Rodney Brooks, suggests in this issue that computation may not even be the right metaphor for what the brain does. If we are profoundly off the mark about the nature of thought, then this objection could be a showstopper. But research that might lead to the singularity covers a much broader range than formal computation. There is great variety even in the pursuit of pure AI. In the next decade, those who credit Moravec's timeline begin to expect results. Interestingly powerful computers will become cheap enough for a thousand research groups to bloom. Some of these researchers will pursue the classic computational tradition that Brooks is doubting—and they may still carry the day. Others will be working on their own abstractions of natural mind functions—for instance, the theory that Christof Koch and Giulio Tononi discuss in their article. Some (very likely Moravec and Brooks himself) will be experimenting with robots that cope with many of the same issues that, for animals, eventually resulted in minds that plan and feel. Finally, there will be pure neurological researchers, modeling increasingly larger parts of biological brains in silico. Much of this research will benefit from improvements in our tools for imaging brain function and manipulating small regions of the brain.
But despite Moravec's estimate and all the ongoing research, we are far short of putting the hardware together successfully. In his essay, Brooks sets several intermediate challenges. Such goals can help us measure the progress that is being made. More generally, it would be good to have indicators and counterindicators to watch for. No single one would prove the case for or against the singularity, but together they would be an ongoing guide for our assessment of the matter. Among the counterindicators (events arguing against the likelihood of the singularity) would be debacles of overweening software ambition: events ranging from the bankruptcy of a major retailer upon the failure of its new inventory management system to the defeat of network-centric war fighters by a transistor-free light infantry. A tradition of such debacles could establish limits on application complexity—independent of any claims about the power of the underlying hardware.

There are many possible positive indicators. The Turing Test—whether a human judge communicating by text alone can distinguish a computer posing as human from a real human—is a subtle but broad indicator. Koch and Tononi propose a version of the Turing Test for machine consciousness in which the computer is presented a scene and asked to “extract the gist of it” for evaluation by a human judge. One could imagine restricted versions of the Turing Test for other aspects of Mind, such as introspection and common sense.

As with past computer progress, the achievement of some goals will lead to interesting disputes and insights. Consider two of Brooks's challenges: manual dexterity at the level of a 6‑year‑old child and object-recognition capability at the level of a 2-year‑old. Both tasks would be much easier if objects in the environment possessed sensors and effectors and could communicate. For example, the target of a robot's hand could provide location and orientation data, even URLs for specialized manipulation libraries. Where the target has effectors as well as sensors, it could cooperate in the solution of kinematics issues. By the standards of today, such a distributed solution would clearly be cheating. But embedded microprocessors are increasingly widespread. Their coordinated presence may become the assumed environment. In fact, such coordination is much like relationships that have evolved between living things.

There are more general indicators. Does the distinction between neurological and AI researchers continue to blur? Does cognitive biomimetics become a common source of performance improvement in computer applications? From an entirely different direction, consider economist Robin Hanson's “shoreline” metaphor for the boundary between those tasks that can be done by machines and those that can be done only by human beings. Once upon a time, there was a continent of human-only tasks. By the end of the 1900s, that continent had become an archipelago. We might recast much of our discussion in terms of the question, “Is any place on the archipelago safe from further inundation?” Perhaps we could track this process with an objective economic index—say, wages divided by world product. However much human wealth and welfare may increase, a sustained decline in the ratio of wages to world product would argue a decline in the human contribution to the economy.
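Vinge's proposed economic index is simple enough to sketch. The index (wages divided by world product) is his suggestion; the numbers in the series below are invented purely for illustration.

```python
def human_share_index(total_wages, world_product):
    """Vinge's proposed indicator: the total wage bill as a fraction of
    world product. A sustained decline would suggest a shrinking human
    contribution to the economy, even if absolute wages keep rising.
    """
    if world_product <= 0:
        raise ValueError("world product must be positive")
    return total_wages / world_product

# Hypothetical series (trillions of dollars): wages grow in absolute terms,
# but world product grows faster, so the human share declines.
series = [(30.0, 60.0), (33.0, 70.0), (36.0, 85.0)]
ratios = [human_share_index(wages, product) for wages, product in series]
declining = all(a > b for a, b in zip(ratios, ratios[1:]))
print(ratios, declining)
```

The point of the toy series is the one Vinge makes in the text: human wealth can increase at the same time as the ratio falls, which is exactly why the ratio, not the absolute wage level, is the informative signal.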

Machine/network life-forms will be faster, more labile, and more varied than what we see in biology. Digital Gaia is a hint of how alien the possibilities are

Some indicators relate different areas of technological speculation. In his essay, physicist Richard A.L. Jones critiques molecular nanotechnology (MNT). Even moderate success with MNT could support Moore's Law long enough to absorb a number of order-of-magnitude errors in our estimates of the computing power of the brain. At the same time, some of the advanced applications that K. Eric Drexler describes—things like cell-repair machines—depend on awesome progress with software. Thus, while success with MNT probably does not need the technological singularity (or vice versa), each would be a powerful indicator for the other.

Several of the essays discuss the plausibility of mind uploads and consequent immortality for “our digitized psyches,” ideas that have recently appeared in serious nonfiction, most notably Ray Kurzweil's The Singularity Is Near. As with nanotechnology, such developments aren't prerequisites for the singularity. On the other hand, the goal of enhancing human intelligence through human-computer interfaces (the IA Scenario) is both relevant and in view. Today a well-trained person with a suitably provisioned computer can look very smart indeed. Consider just a slightly more advanced setup, in which an Internet search capability plus math and modeling systems are integrated with a head‑up display. The resulting overlays could give the user a kind of synthetic intuition about his or her surroundings. At a more intimate but still noninvasive level, DARPA's Cognitive Technology Threat Warning System is based on the idea of monitoring the user's mental activities and feeding the resulting analysis back to the user as a supplement to his or her own attention. And of course there are the researchers working with direct neural connections to machines. Larger numbers of implanted connections may allow selection for effective subsets of connections. The human and the machine sides can train to accommodate each other.

To date, research on neural prostheses has mainly involved hearing, vision, and communication. Prostheses that could restore any cognitive function would be a very provocative indicator. In his essay, John Horgan discusses neural research, including that of T.W. Berger, into prostheses for memory function. In general, Horgan and I reach very different conclusions, but I don't think we have much disagreement about the facts; Horgan cites them to show how distant today's technology is from anything like the singularity—and I am saying, “Look here, these are the sorts of things we should track going forward, as signs of progress toward the singularity (or not).”

The Biomedical Scenario—directly improving the functioning of our own brains—has a lot of similarities to the IA Scenario, though computers would be only indirectly involved, in support of bioinformatics. In the near future, drugs for athletic ability may be only a small problem compared with drugs for intellect. If these mind drugs are not another miserable fad of uppers and downers, if they enable real improvements to memory and creativity, that would be a strong indicator for this scenario. Much further out—for both logistical and ethical reasons—is the possibility of embryo optimization and germ-line engineering. Biomedical enhancement, even the extreme varieties, probably does not scale very well; however, it might help biological minds maintain some influence over other progress.

Brooks suggests that the singularity might happen—and yet we might not notice. Of the scenarios I mentioned at the beginning of this essay, I think a pure Internet Scenario—where humanity plus its networks and databases become a superhuman being—is the most likely to leave room to argue about whether the singularity has happened or not. In this future, there might be all-but-magical scientific breakthroughs. The will of the people might manifest itself as a seamless transformation of demand and imagination into products and policy, with environmental and geopolitical disasters routinely finessed. And yet there might be no explicit evidence of a superhuman player.

A singularity arising from networks of embedded microprocessors—the Digital Gaia Scenario—would probably be less deniable, if only because of the palpable strangeness of the everyday world: reality itself would wake up. Though physical objects need not be individually sapient, most would know what they are, where they are, and be able to communicate with their neighbors (and so potentially with the world). Depending on the mood of the network, the average person might notice a level of convenience that simply looks like marvelously good luck. The Digital Gaia would be something beyond human intelligence, but nothing like human. In general, I suspect that machine/network life-forms will be faster, more labile, and more varied than what we see in biology. Digital Gaia is a hint of how alien the possibilities are.
In his essay, Hanson focuses on the economics of the singularity. As a result, he produces spectacular insights while avoiding much of the distracting weirdness. And yet weirdness necessarily leaks into the latter part of his discussion (even leaving Digital Gaia possibilities aside). AI at the human level would be a revolution in our worldview, but we can already create human-level intelligences; it takes between nine months and 21 years, depending on whom you're talking to. The consequences of creating human-level artificial intelligence would be profound, but it would still be explainable to present-day humans like you and me.

But what happens a year or two after that? The best answer to the question, “Will computers ever be as smart as humans?” is probably “Yes, but only briefly.”

For most of us, the hard part is believing that machines could ever reach parity. If that does happen, then the development of superhuman performance seems very likely—and that is the singularity. In its simplest form, this might be achieved by “running the processor clock faster” on machines that were already at human parity. I call such creatures “weakly superhuman,” since they should be understandable if we had enough time to analyze their behavior. Assuming Moore's Law muddles onward, minds will become steadily smarter. Would economics still be an important driver? Economics arises from limitations on resources. Personally, I think there will always be such limits, if only because Mind's reach will always exceed its grasp. However, what is scarce for the new minds and how they deal with that scarcity will be mostly opaque to us.

The period when economics could help us understand the new minds might last decades, perhaps corresponding to what Brooks describes as “a period, not an event.” I'd characterize such a period as a soft takeoff into the singularity. Toward the end, the world would be seriously strange from the point of view of unenhanced humans.

A soft takeoff might be as gentle as changes that humanity has encountered in the past. But I think a hard takeoff is possible instead: perhaps the transition would be fast. One moment the world is like 2008, perhaps more heavily networked. People are still debating the possibility of the singularity. And then something...happens. I don't mean the accidental construction that Brooks describes. What I'm thinking of would probably be the result of intentional research, perhaps a group exploring the parameter space of their general theory. One of their experiments finally gets things right. The result transforms the world—in just a matter of hours. A hard takeoff into the singularity could resemble a physical explosion more than it does technological progress.

If the singularity happens, the world passes beyond human ken
I base the possibility of hard takeoff partly on the known potential of rapid malcode (remember the Slammer worm?) but also on an analogy: the most recent event of the magnitude of the technological singularity was the rise of humans within the animal kingdom. Early humans could effect change orders of magnitude faster than other animals could. If we succeed in building systems that are similarly advanced beyond us, we might experience a similar incredible runaway.

Whether the takeoff is hard or soft, the world beyond the singularity contains critters who surpass natural humans in just the ability that has so empowered us: intelligence. In human history, there have been a number of radical technological changes: the invention of fire, the development of agriculture, the Industrial Revolution. One might reasonably apply the term singularity to these changes. Each has profoundly transformed our world, with consequences that were largely unimagined beforehand. And yet those consequences could have been explained to earlier humans. But if the transformation discussed in this issue of Spectrum occurs, the world will become intrinsically unintelligible to the likes of us. (And that is why “singularity,” as in “black hole singularity of physics,” is the cool metaphor here.) If the singularity happens, we are no longer the apex of intellect. There will be superhumanly intelligent players, and much of the world will be to their design. Explaining that to one of us would be like trying to explain our world to a monkey.

Both Horgan and Nordmann express indignation that singularity speculation distracts from the many serious, real problems facing society. This is a reasonable position for anyone who considers the singularity to be bogus, but some form of the point should also be considered by less skeptical persons: if the singularity happens, the world passes beyond human ken. So isn't all our singularity chatter a waste of breath? There are reasons, some minor, some perhaps very important, for interest in the singularity. The topic has the same appeal as other great events in natural history (though I am more comfortable with such changes when they are at a paleontological remove). More practically, the notion of the singularity is simply a view of progress that we can use—along with other, competing, views—to interpret ongoing events and revise our local planning. And finally: if we are in a soft takeoff, then powerful components of superintelligence will be available well before any complete entity. Human planning and guidance could help avoid ghastliness, or even help create a world that is too good for us naturals to comprehend.

Horgan concludes that “the singularity is a religious rather than scientific vision.” Brooks is more mellow, seeing “commonalities with religious beliefs” in many enthusiasts' ideas. I argue against Horgan's conclusion, but Brooks's observation is more difficult to dispute. If there were no other points to discuss, then those commonalities would be a powerful part of the skeptics' position. But there are other, more substantive arguments on both sides of the issue.

And of course, the spirituality card can be played against both skeptics and enthusiasts: Consciousness, intelligence, self-awareness, emotion—even their definitions have been debated since forever, by everyone from sophomores to great philosophers. Now, because of our computers, the applications that we are attempting, and the tools we have for observing the behavior of living brains, there is the possibility of making progress with these mysteries. Some of the hardest questions may be ill-posed, but we should see a continuing stream of partial answers and surprises. I expect that many successes will still be met by reasonable criticism of the form “Oh, but that's not really what intelligence is about” or “That method of solution is just an inflexible cheat.” And yet for both skeptics and enthusiasts, this is a remarkable process. For the skeptic, it's a bit like subtractive sculpture, where step-by-step, each partial success is removing more dross, closing in on the ineffable features of Mind—a rather spiritual prospect! Of course, we may remove and remove and find that ultimately we are left with nothing but a pile of sand—and devices that are everything we are, and more. If that is the outcome, then we've got the singularity.

About the Author
VERNOR VINGE, who wraps up this issue, first used the term singularity to refer to the advent of superhuman intelligence while on a panel at the annual conference of the Association for the Advancement of Artificial Intelligence in 1982. Three of his books—A Fire Upon the Deep (1992), A Deepness in the Sky (1999), and Rainbows End (2006)—won the Hugo Award for best science-fiction novel of the year. From 1972 to 2000, Vinge taught math and computer science at San Diego State University.

Incurable American Optimism



Interviewed by Mark Molaro, American professor and media expert Paul Levinson talks here about the state, influence and future of the new media.

Levinson is the author of "Digital McLuhan" and "The Soft Edge" and has appeared in countless media venues from PBS to Fox to offer his insight on media issues.

In this video Levinson discusses the current exponential rise of new media and what Marshall McLuhan would think of the digital age we live and now create in.

I wonder if this kind of 100% favorable and enthusiastic discourse about Web 2.0 and new media is possible in Europe.

Watching this video made me think of Jean Baudrillard's "hyper-information age", and once again I was reminded of Baudrillard's distinction between the American and European ways of thinking:

"Seen from America, and by American intellectuals [Susan Sontag], the disavowal of reality in European cultures, and singularly in French theory, is merely the "metaphysical" spite of no longer being master of that reality, and the manifestation, at once arrogant and ironic, of that impotence.


And that is no doubt true.


But vice versa: is this commitment to reality, this "affirmative thinking", not, among the Americans, the naive and ideological expression of the fact that, by virtue of their power, they hold the monopoly on reality?


We certainly live in a ridiculous nostalgia for glory [for history, for culture], but they live in the ridiculous illusion of performance." Jean Baudrillard, Cool Memories V

http://holychic.blogspot.com/2008/07/incurable-american-optimism.html

Mobile Trends 2008

Mobile 2.0 @ Plugg



Presentation "Mobile 2.0 - what is it and why should you care?" by Rudy De Waele at Plugg Conference in Brussels on March 19, 2008

A deep dive into the future of mobile with Rudy De Waele, one of the world's most renowned mobile strategists, featuring a look at historical and upcoming trends, insights on potential revenue models and the industry's leading protagonists.

Mobile and Wireless Trends for 2008

by Rudy De Waele:

1. Google’s Android and the Open Handset Alliance will definitely take off in 2008. While the iPhone is probably doing the best job of embracing mobile and web convergence, the Apple OS is still a closed system used by a rather small market segment. Nokia’s Nseries - though all remarkable devices - didn’t produce any breakthrough Symbian OS changes last year, and the OS is still too buggy to go mass-market - I don’t see my sister or father performing a device software update; which leaves the opportunity for Google and the Open Handset Alliance to get the new Linux-based operating system Android on several cutting-edge smartphones before year-end. Mobile OS, a truly competitive space in 2008!

2. The Rise of the Mobile Social Networks. M:Metrics released some promising data mid-2007 on the rise of the Mobile Social Networks. With the big social media networks all going mobile in 2007 (Facebook, MySpace, YouTube and Bebo, …), this trend will continue to rise in 2008, sustained by more flat rate introductions on different markets.

3. Apple will be seriously attacked by the music industry on its own, once disruptive, iTunes business model. 2008 will be the year of the further downfall of DRM and the rise of watermarked audio files. Sony BMG is planning to drop DRM - the last of the Big Four record labels, after Warner Music Group, Universal Music Group and EMI Music, to throw in the towel on digital rights management. The end of DRM might embolden a host of new online download venues initiated by the Big Four in their search for a successful digital strategy. Note also the rise of new business models (!) giving away DRM-free, ad-supported music downloads, like the recently founded Rcrd Lbl by Peter Rojas. Read my DRM Free at Last! for a recent overview and links to previous posts on this topic.

4. Telefonica will introduce the 3G iPhone. To be announced at Mobile World Congress in Barcelona in February?

5. The return of Location-Based Services. Since Nokia introduced the Nseries N95 with built-in GPS, Location-Based Services are becoming exciting again. A new wave of mobile services and applications built on the location of the user (cell-ID and/or GPS) will see the light this year, driven by the open Google Maps API and flickr’s geotagged photo function. Read also my early 2005 coverage of what were formerly known as MoSoSos.

6. First iPhone competitors coming to market. Nokia will introduce a serious competitor for the iPhone. It has the hardware manufacturing intelligence and knowledge to come up with its own multi-touch screen interface. The biggest challenge for Nokia (and other manufacturers) will be to keep the OS user-experience as simple as the iPhone's. Expect some great innovative devices from HTC too in 2008! (check out the HTC Touch Dual).

7. Mobile Video Blogging starting to take off. Though still used mainly by early adopters, mobile video blogging tools such as Kyte.tv mobile are already doing a great job, with Floobs and KaZiVu also looking very promising (both still in beta), not to forget YouTube Mobile. All eyes will be on Seesmic, however, which has the right start-up vibe - instigated daily by its impressively experienced shareholders (and web 2.0 icons) and its very active beta-tester community. Imagining Seesmic on your mobile phone is easy; the challenge for Seesmic is to get past the complex technical issues and deliver on its great idea.

8. Mobile search, as predicted last year, will continue to be one of the most important and most used mobile applications. I keep this one on my list, adding that some new players might disrupt the big Search market players, who have not yet figured out the real mobile search issues such as accuracy, context, relevance, latency and the correct display of local and niche results.

9. PRM (Personal Rights Management) and Privacy policies and procedures will be high on the agenda for every enterprise and conscious connected individual. Already the talk of the connected crowds at LeWeb3: opening up the Social Graph might appear cool in your social media community, but it has to be done right! As a starter, check out Dataportability.org and watch Robert Scoble explaining his recent portability issues with Facebook.

10. Twitter and the breakthrough of the ultimate Mobile Presence Tool. Yes, Twitter is the ultimate mobile presence tool, since it’s the easiest to use (through SMS and mobile web access) and the most accurate way to stay connected at any time from anywhere… Jaiku definitely has a richer client, but Twitter is the most easily integrated into most of your social networks; check out MoodBlast, which can simultaneously update multiple chat clients and web-service presence tools. 2008 will also see the rise of lifestreaming apps like Tumblr, surprisingly simple on the web and great-looking on your mobile phone.



We live in exponential times


Karl Fisch created this presentation to start conversations with other teachers about the world their students are entering.


It is a collection of facts and figures about globalization, the information age, and America’s changing position in the world.


Since Karl generously shared the original presentation under a Creative Commons license, there have been several remixes, many YouTube versions, and SlideShare versions.


This is one of the video presentations of SHIFT HAPPENS


Created by Karl Fisch, and modified by Scott McLeod




The idea is that shifts in society are presenting us with some very real issues which must be addressed, along with some startling facts.

The facts:

-In the next eight seconds, 32 babies will be born.

-Of the world's 2006 College graduates: 1.3 million came from the U.S., 3.1 million came from India, and 3.3 million came from China.

-Of the 3.1 million graduates from India, one hundred percent of them speak English.

-It is estimated that in ten years the world's largest population of English speakers will be in China.

-One in four workers has been with their current employer for less than one full year.

-The United States Department of Labor estimates that today's students will have about 10 jobs by the time they are 38.

-Most of today's college majors did not even exist ten years ago - majors like New Media, Organically Produced Agriculture, e-business, nanotechnology, and Homeland Security.

-Today's 21-year-olds have viewed 20,000 hours of television, played 10,000 hours of video games, spent 10,000 hours talking on the phone, sent 250,000 emails or instant messages, and have created more than 50% of today's internet content.

-70% of today's 4-year-olds have used the internet.

-It took 38 years for the radio to reach a market of 50 million people. It only took the TV 13 years to reach the 50 million mark. But it only took 4 years for the internet to reach the 50 million mark.

-The number of internet devices jumped from 1,000 in 1984 to 600,000,000 in 2006.

-The first commercial text message was sent in 1992, and today the number of text messages sent on any given day exceeds the population of our planet.

-The internet was first used widely in 1995, and in 2006, one in eight married couples met online.

-In this month alone there were 2.7 billion searches performed on the Google search engine.

-There are currently more than 540,000 words in the English language, more than 5 times as many as in Shakespeare's time.

-Today the amount of new technical information doubles every two years, and by 2010 it is predicted that the new information will double every two days.

-Currently the fiber-optic technology in use can push 10 trillion bits per second down a strand of fiber; this translates into 1,900 compact disks (CDs) or 150 million simultaneous telephone calls per second.
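The CD and phone-call equivalents can be sanity-checked with a few lines of arithmetic. This is a minimal sketch, assuming a ~700 MB CD and a standard 64 kbit/s uncompressed digital voice call (both figures are my assumptions, not stated in the presentation):

```python
# Rough sanity check of the fiber figures quoted above.
fiber_bps = 10e12              # 10 trillion bits per second
cd_bits = 700 * 10**6 * 8      # one ~700 MB CD, in bits
call_bps = 64_000              # one uncompressed voice call, bits per second

cds_per_second = fiber_bps / cd_bits          # roughly 1,800 CDs every second
simultaneous_calls = fiber_bps / call_bps     # roughly 156 million calls

print(f"{cds_per_second:,.0f} CDs/s and {simultaneous_calls/1e6:.0f} million calls")
```

Both results land in the same ballpark as the figures in the presentation (~1,900 CDs and ~150 million calls), so the claim is at least internally consistent.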

-There are nearly 2 billion children living in developing countries; one in three of these children does not complete the fifth grade. The One Laptop per Child Project set out to change this by providing laptops for these children.

-It is believed that by the time the children born in 2007 are six years of age, a supercomputer will have more computational power than the human brain.

-It is predicted that by 2048 there will be a $1,000 computer that surpasses the entire human race in computational power.

What does it all mean?

SHIFT HAPPENS

Today's students are being trained to perform jobs that don't yet exist, using technologies which we do not yet have, in order to solve problems we don't yet know are problems.

The truth of the matter is: WE LIVE IN EXPONENTIAL TIMES

What does that mean exactly?

It means that our society is growing in exponential leaps and bounds and there are things we must realize to prepare for the future.

Entering the zone of total transparence

This 5-minute video, produced by the Italian web consulting agency Casaleggio Associati, aims to predict the future of the Internet over the next 43 years.

In short, the Net will include and unify not only media content but also our private lives.

And the big winner will be Google. In 2051 we will live in an all-encompassing Second Life named Prometeus. You will be present ONLY as your avatar, and you will not exist outside of Prometeus.

Devices that replicate the five senses will be available in the virtual worlds. We will really feel and live in the Second Life (oops, Prometeus, controlled by Google). BTW, even this blog is owned by Google, and it is probably collecting more information about me than I'm able to do for myself.

New providence or sharp viral ad? Should we regret that we will not be alive in 2051, when our lives will be so restrained by overall Google control that the only way to become what you want is not by living your own life, but only by living the (marketable!) life of your avatar in the Second Life?

But on the other hand, it's the only sure wager for our eternity - maybe we will still be alive in 2051 only thanks to Google, who are collecting everything about us right now (watch the video at the end of this post).

But first, read the complete text of the video above:

"Man is God. He is everywhere, he is anybody, he knows everything.

This is the Prometeus new world. All started with the Media Revolution, with Internet, at the end of the last century.

Everything related to the old media vanished: Gutenberg, the copyright, the radio, the television, the publicity.

The old world reacts: more restrictions on copyright, new laws against unauthorized copies. Napster, the music peer-to-peer company, is sued.

At the same time, free internet radio appears; TIVO, the internet television, allows viewers to avoid publicity; the Wall Street Journal goes online; Google launches Google News.

Millions of people daily read the biggest online newspaper,

OhMyNews, written by thousands of journalists;

Flickr becomes the biggest repository of photos in history, YouTube the biggest for movies.

The power of the masses.

A new figure emerges: the prosumer, a producer and a consumer of information. Anyone can be a prosumer.

The news channels become available on Internet. The blogs become more influential than the old media. The newspapers are released for free.

Wikipedia is the most complete encyclopedia ever.

In 2007 Life magazine closes (sic!). The NYT sells its television channel and declares that the future is digital. BBC follows. In the main cities of the world, people are connected for free. At street corners, totems print pages from blogs and digital magazines.

The virtual worlds are common places on the Internet for millions of people. A person can have multiple online identities. Second Life launches the vocal avatar.

The old media fight back. A tax is added on any screen; newspapers, radios and televisions are financed by the State; illegal downloading from the web is punished with years of jail.

Around 2011 the tipping point is reached: publicity investments are made on the Net. Electronic paper is a mass product: anyone can read anything on plastic paper.

In 2015 newspapers and broadcasting television disappear, digital terrestrial is abandoned, the radio goes on the Internet. The media arena is less and less populated. Only the Tyrannosaurus Rex survives.

The Net includes and unifies all the content.

Google buys Microsoft. Amazon buys Yahoo!, and they become the world's universal content leaders together with BBC, CNN and CCTV.

The concept of static information - books, articles, images - changes and is transformed into knowledge flow.

The publicity is chosen by the content creators, by the authors and becomes information, comparison, experience.

In 2020 Lawrence Lessig, the author of 'Free Culture', is the new US Secretary of Justice and declares the copyright illegal.

Devices that replicate the five senses are available in the virtual worlds. The reality could be replicated in Second Life.

Anyone has an Agav (agent-avatar) that finds information, people and places in the virtual worlds.

In 2022 Google launches Prometeus, the Agav standard interface.

Amazon creates Place, a company that replicates reality.

You can be on Mars, at the battle of Waterloo, at the Super Bowl as a person. It's real.

In 2027 Second Life evolves into Spirit. People become who they want. And share the memory. The experiences. The feelings. Memory selling becomes normal trading.

In 2050 Prometeus buys Place and Spirit. Virtual life is the biggest market on the planet. Prometeus finances all the space missions to find new worlds for its customers: the terrestrial avatar.

Experience is the new reality."

Voice: Philip K. Dick Avatar!

Watch This Short Movie About The Power Of Google