I’m trying to compile a list of major technological and societal trends that influence U.S. computing research. Here’s my initial list. Please post your own suggestions!
- Ubiquitous connectivity, and thus true mobility
- Massive computational capability available to everyone, through the cloud
- Exponentially increasing data volumes – from ubiquitous sensors, from higher-volume sensors (digital imagers everywhere!), and from the creation of all information in digital form – have led to a torrent of data that must be transferred, stored, and mined: “data to knowledge to action”
- Social computing – the way people interact has been transformed, and so has the data we have from and about people
- All transactions (from purchasing to banking to voting to health) are online, creating the need for dramatic improvements in privacy and security
- Cybercrime
- The end of single-processor performance increases, and thus the need for parallelism to increase performance in operating systems and productivity applications, not just high-end applications; also power issues
- Asymmetric threats, need for surveillance, reconnaissance
- Globalization – of innovation, of consumption, of workforce
- Pressing national and global challenges: climate change, education, energy / sustainability, health care (these replace the cold war)
What’s on your list? Please post below!
[cross-posted from CCC Blog]
Small is better for…
1) access
2) energy consumption
3) versatility
4) logistics
5) ISR, C2, etc
6) flexibility
The movement from “computers” to “appliances,” in which users cannot develop or acquire custom software except with the manufacturer’s permission, seems to me very important. The iPhone and iPad are the obvious examples, but many others exist (e-book readers, etc.), and this will no doubt become a major trend in the next few years.
I think fabrication technology is seriously underrated, because the prices are still coming down, processes are still being developed, and standards are still being set. I was looking at the assembly instructions for the Cupcake 3D plastic printer, which sells as a kit for under $1,000. It’s still a toy right now, but I remember the NCC back in 1979, over 30 years ago, when I was approached by a guy who was trying to sell a 3D gel surface printer. I wouldn’t be at all surprised to see much more advanced 3D printers and fabricators in shops and towns around the world by 2040. Why should a hardware store stock 1,000 different kinds of screws, nuts, and bolts when they can be printed to order? Why should spare parts be stocked at all?
Of course, we won’t call them 3D printers or fabricators or anything like that. I was reading some Doc Smith science fiction from the ’30s in which he describes how the good guys had so many bad guys to fight that they had to use keyboard macros to control their weapons systems. Needless to say, he didn’t call them “keyboard macros,” but that’s what they were.
“Massive computational capability available to everyone, through the cloud.”
Surely this one is back to front – based on 1960s thinking.
There is far more computational capability in the local devices these days. In fact, projects like SETI@home work exactly in reverse – massive computational capability made available to a central resource, through everyone…
Of course it depends on what exactly you mean by “the cloud”. If you mean a set of big servers in some datacentres somewhere, then the trend towards that is driven by the control freakery of those who never quite got over the idea of the personal computer. On the other hand, the true “cloud” – or maybe you could call it “the mist”, where ordinary people share each other’s resources, as in P2P – is an entirely different concept.
There’s a lot of interesting work being done here, from high-tech prosthetics to body-computer connections, not to mention computer-driven DNA analysis.
I think a driving factor in research is the ever-expanding capability of virtualization and modeling. Whether it is modeling a human body for medical research and teaching, virtualizing hostile environments for military training, or simulating entire societies and economies (e.g., Second Life), the ability to dispense with a physical implementation is opening up a wealth of possibilities for further study.
I think the emergence of India, China, and other economies is another driver. Apart from providing cheap human labour, it also exerts strong downward pressure on technology items with mass appeal. Cellphones are a defining example.
There is huge growth in the availability of, and demand for, mobile location-based services (compared to traditional location-based services). I think the growth is due to the availability of more and more powerful smartphones. Yelp, theFind, SnapTell, Goggles, etc. are some examples. These phones now increasingly interact with physical objects (for example, in the old days you had to type everything; now you take a snap with the camera and expect the phone to do the rest). I think this will be a huge driver of innovation (in both hardware and software) to bridge the gap between the physical and logical worlds, i.e. to provide a better user experience.
I generally consider asymmetric threats overrated, if by asymmetric threats we mean David-vs.-Goliath conflicts in which Goliath is the protagonist, as exemplified by the ‘Global Guerrillas’ website.
What I do regard as a real threat is informational asymmetry, exemplified by the ubiquity of ‘webstacles’: the tendency of commercial websites to dispense data to consumers one data point at a time (while of course prohibiting deep linking), while the data mining industry receives the raw feed on consumer transactions in bulk. No offense intended to the Taiwanese, but I call it TAIWAN: Total Asymmetric Information Warfare Against Nescients. I propose countermeasures.
Increasing availability of broadband, both wired and wireless.
Positioning by itself is useful enough, and marrying it to maps/GIS datasets was the obvious first step. But making positioning info available to PDA/smartphone applications provides a significant degree of additional context for those applications to work from. Expect position data to be a critical element in audio lifelogging applications (imagine a personal eavesdropper).
I strongly second Anderer Gregor’s comment above that the availability of free and open source software is a driving trend. The ability to quickly and cheaply deploy new infrastructure on commodity hardware allows for much faster growth than before.
I think we shouldn’t forget basic theoretical research as a driving factor. New results and the development of new primitives often open up new areas of technological growth. For example, the results of public key cryptography were fundamental to the development of secure e-commerce. Or, in systems research, the development of overlay networks and DHTs led to peer-to-peer applications. For more examples, I suppose any of the results honored by the Paris Kanellakis Award would qualify. 🙂
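To make the DHT point a bit more concrete, here is a minimal consistent-hashing sketch in Python (the node names and the ToyDHT class are made up for illustration; real systems like Chord add routing tables, replication, and churn handling on top of this core idea): keys and nodes are hashed onto the same identifier ring, and each key lives on the first node at or after its position.

```python
import hashlib
from bisect import bisect_left

def ring_position(name: str) -> int:
    # Hash a key or node name onto a fixed-size identifier ring.
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** 32)

class ToyDHT:
    """Toy consistent-hashing ring: not Chord itself, just the core idea."""

    def __init__(self, nodes):
        # The ring is a sorted list of (position, node name) pairs.
        self.ring = sorted((ring_position(n), n) for n in nodes)
        self.positions = [pos for pos, _ in self.ring]

    def lookup(self, key: str) -> str:
        # A key belongs to the first node at or after its ring position,
        # wrapping around to the start of the ring if necessary.
        idx = bisect_left(self.positions, ring_position(key)) % len(self.ring)
        return self.ring[idx][1]

# Hypothetical node names, purely for illustration.
dht = ToyDHT(["node-a", "node-b", "node-c", "node-d"])
print(dht.lookup("some-file.mp3"))  # prints whichever node owns that key
```

The appeal for peer-to-peer systems is that when a node joins or leaves, only the keys adjacent to it on the ring need to move, so the overlay scales without any central index.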
A second driving factor that I think should be included is the growth of targeted medicine through whole and partial genome sequencing. Although that may not be driving technology per se, I would see it as part of the larger goal of many computational biology projects.
The cost of storing petabytes of data is falling at a predictable rate, and programming models like MapReduce enable data stored on commodity SATA storage to be analysed without needing high-end data warehousing servers, RAID storage, etc. Until now, nobody could afford to keep all the data their web sites collected. Not any more – and the same technologies and techniques apply to any other data collected.
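To illustrate why the MapReduce style fits commodity hardware, here is a deliberately tiny single-process sketch in Python (the sample “log lines” and function names are invented for illustration; a real job would run on a framework such as Hadoop, which handles the shuffle, distribution, and fault tolerance). The computation is expressed as independent map and reduce steps, so it can be spread across many cheap machines that each scan their local slice of the data.

```python
from collections import defaultdict

# Pretend each string is one line from a (tiny) web-server log.
log_lines = [
    "GET /index.html 200",
    "GET /about.html 404",
    "GET /index.html 200",
]

def map_phase(line):
    # Emit (key, value) pairs; here, one count per requested path.
    path = line.split()[1]
    yield (path, 1)

def reduce_phase(key, values):
    # Combine all the values emitted for a single key.
    return (key, sum(values))

# The framework normally performs this shuffle step: group values by key.
grouped = defaultdict(list)
for line in log_lines:
    for key, value in map_phase(line):
        grouped[key].append(value)

results = [reduce_phase(k, vs) for k, vs in grouped.items()]
print(results)  # [('/index.html', 2), ('/about.html', 1)]
```

Because each map call needs only its own input record and each reduce call needs only the values for its own key, the work can be pushed out to the machines that already hold the data, which is what makes plain SATA disks good enough.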
Note that once your storage demands start to be measurable in petabytes, the notion of “cloud computing” changes, as you can’t just rent a couple of servers for an hour. You need machines near the storage, and the only way for companies like Amazon, Yahoo!, and Facebook to get that is to own their own datacenters. Cloud computing as in “infrastructure on demand” gives smaller groups access to some of the same infrastructure, but with a different cost model.
There’s another trend here: the growth of the datacenter as the replacement for the mainframe.
The move from product-based to service-based payment, requiring constant cash flow to keep using computing devices (especially mobile ones).
It’s understandable, given that the sale of electronic products is becoming more and more of a race to the bottom. But it is also tragic that people-as-consumers are increasingly expected to commit to a level series of payments, stretching over years, for access to connectivity, while people-as-workers are increasingly expected to settle for temp and part-time jobs. Happily, prepaid services are catching on, but affordability is still a major component of the digital divide.
You probably dare not notice it, but there is actually a civil cyberwar going on at the moment between 18th century cultural monopolists and 21st century cultural libertarians – or as the other side puts it: between starving artists and pirate scum.
I should add that this affects the research and development of distributed systems (aka p2p, file-sharing facilities), by stigmatising the development of any technology that facilitates the widespread infringement of copyright (and effective circumvention of any obfuscation schemes/DRM/TPMs).
21st century diffusion technologies are anathema to businesses based on 18th century reproduction monopolies.
I have been dwelling on Ed’s post for a while. Crosbie’s post provided a “hook” on how to approach this issue.
1. I have noticed that technology today is being used to lock customers into a company’s product line through the use of proprietary technologies. As one example, we had a Belkin UPS that recognized Windows power management. That UPS was fried in a lightning strike to our house, and Belkin replaced it. The “new” UPS, even though it had the same model number, would no longer recognize Windows power management; you would have had to use Belkin’s proprietary software.
2. So-called intellectual property is being claimed (as in a land grab) as a property right. To enforce this so-called right, we have seen (and are seeing) the development of technologies that will “inspect” whatever we do on the internet and even in our homes. So Big Brother is, and will be, watching us.
It occurred to me that I may need a better analogy to explain what I was stating. When one plugs a lamp into the wall socket and turns the power switch on, the light comes on (standard technology). But it seems that (intervening proprietary) technology is now being applied in such a manner that when you press the power button, the light might not come on, because turning it on “violates” some rule established by the manufacturer.
Technological convergence, i.e. the trend from “phones” to “mobile computers”, means that, while many people started carrying phones around over the past decade, they’re now increasingly also carrying cameras, GPS devices, a web browser, etc. The convergence of these technologies into a single device makes them more ubiquitous.
The availability of open source tools and libraries, and of Creative Commons data, makes it simpler for other teams to continue or “fork” research (examples: GIZA, MOSES, … in machine translation).
1) You should also include open standards. Yes, there is a lot of jockeying around and attempting to build closed standards, but there is a big market out there and open standards have the advantage that buy-in is cheap. This refers to both software and hardware standards, and I think the value of open hardware is underrated.
2) There is also the Creative Commons and the open information culture. Despite lots of counter-examples, there is a lot of anti-DRM pressure built on the market benefits of openness. It was Steve Jobs who, despite his penchant for closed systems, convinced the music industry to punt on DRM. (Though it might take the EFF to convince them to give up on suing their best customers.)