Category Archives: Computer Technology

2015 International Technology Roadmap for Semiconductors (ITRS) and Moore’s Law

On 8 July 2016, the Semiconductor Industry Association (SIA) and its international partners announced the release of the 2015 International Technology Roadmap for Semiconductors (ITRS), which it describes as follows:

ITRS is “a collaborative report that surveys the technological challenges and opportunities for the semiconductor industry through 2030. The ITRS seeks to identify future technical obstacles and shortfalls, so the industry and research community can collaborate effectively to overcome them and build the next generation of semiconductors – the enabling technology of modern electronics.”

ITRS report cover
You can download the 2015 ITRS Executive Report and the seven technical sections at the following link:

http://www.semiconductors.org/main/2015_international_technology_roadmap_for_semiconductors_itrs/

Key points from the Executive Report are the following:

  • Economic gains from adopting manufacturing processes and packaging for smaller transistors are decreasing.
    • The magnitude of the investment needed for developing the processes and devices for manufacturing the highest performance chips has reduced the number of top-tier manufacturers (IC foundries) to just four.
  • Advanced manufacturing technologies exist for increasing transistor density, including smaller 2D features and 3D (stacked) features.
    • As features approach 10 nm (nanometers), the IC manufacturers are running out of horizontal space.
    • Flash memory is leading the way in 3D manufacturing to enable higher packing densities.
  • As packing densities continue to increase, new techniques will be needed by about 2024 to ensure adequate heat removal from the highest density chips.
    • At some point, liquid cooling may be required.
  • Companies without fabrication facilities (e.g., Apple) produce IC designs that are manufactured by a foundry company (e.g., Samsung manufactures the Apple A6X IC).
  • A new “ecosystem” has evolved in the past decade that is changing the semiconductor industry and blurring the way that performance scaling is measured.
    • Manufacturing advances enable further miniaturization of IC features and the integration of digital system functions (e.g., logic, memory, graphics, and other functionality) on a single die (system-on-a-chip, SoC). This is known as “More Moore” (MM).
    • System integration and packaging advances enable multiple related devices (e.g., power & power management, interfaces with the outside world) to be integrated in a single package along with the IC (system-in-package, SiP). This is known as “More-than-Moore” (MtM).
    • You can see the distinction between MM and MtM in the following diagram from the ITRS white paper, “More-than-Moore,” by Wolfgang Arden et al., http://www.itrs2.net/uploads/4/9/7/7/49775221/irc-itrs-mtm-v2_3.pdf


More than Moore fig 1

The transition to computationally intensive cloud computing enables effective use of “big data”. In contrast, there has been a proliferation of smart, low-power, functionally diverse devices that generate or use instant data, and can be linked to the cloud as part of the Internet of Things (IoT). These different ends of the spectrum (cloud & IoT) create very different demands on the semiconductor industry. They also complicate measurement of industry performance. It’s not just Moore’s Law anymore.

Big data & instant data


The 2015 ITRS offers an expanded set of metrics to assess the combined performance of SoC and SiP in delivering higher value systems. This measurement scheme is shown conceptually below (from the same ITRS “More-than-Moore” white paper cited above).

More than Moore fig 2

For another perspective on the 2015 ITRS report, you can read a short article by Sebastian Anthony on the Ars Technica website at the following link:

http://arstechnica.com/gadgets/2016/07/itrs-roadmap-2021-moores-law/?mbid=synd_digg&utm_source=howtogeek&utm_medium=email&utm_campaign=newsletter


A Newcomer at the Top of the June 2016 TOP500 Ranking of the World’s Supercomputers

The latest TOP500 ranking of the world’s 500 most powerful supercomputers was released on 20 June 2016. Since June 2013, China’s Tianhe-2 supercomputer had topped this ranking at 33.86 petaflops. Now there is a new leader, and once again it is a Chinese supercomputer: the Sunway TaihuLight.

Source: Jack Dongarra, Report on the Sunway TaihuLight System, June 2016

Details are available at the TOP500 website:

https://www.top500.org

On this website, Michael Feldman commented on the new leader in the TOP500 ranking:

“A new Chinese supercomputer, the Sunway TaihuLight, captured the number one spot on the latest TOP500 list of supercomputers released on Monday morning at the ISC High Performance conference (ISC) being held in Frankfurt, Germany.  With a Linpack mark of 93 petaflops, the system outperforms the former TOP500 champ, Tianhe-2, by a factor of three. The machine is powered by a new ShenWei processor and custom interconnect, both of which were developed locally, ending any remaining speculation that China would have to rely on Western technology to compete effectively in the upper echelons of supercomputing.”

Remarkably, the Sunway TaihuLight delivers this significant performance increase with lower power consumption than Tianhe-2: 15,371 kW for TaihuLight vs. 17,808 kW for Tianhe-2.

You can read Michael Feldman’s complete article at the following link:

https://www.top500.org/news/china-tops-supercomputer-rankings-with-new-93-petaflop-machine/

You also can read the press release for the new TOP500 listing at the following link:

https://www.top500.org/news/new-chinese-supercomputer-named-worlds-fastest-system-on-latest-top500-list/

You’ll find the list of the top 10 supercomputers at the following link:

https://www.top500.org/lists/2016/06/

From here, you can navigate to the complete listing of all 500 supercomputers by going to the grey box titled RELEASE and selecting The List.

U.S. supercomputers Titan and Sequoia are ranked 3rd and 4th, respectively; each delivers about 17% of the Rmax rating of the Sunway TaihuLight while consuming about half the power, making the Sunway TaihuLight significantly more power efficient than either machine.
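
To put those figures in perspective, here is a quick back-of-the-envelope calculation (in Python) of gigaflops per watt. The TaihuLight and Tianhe-2 numbers come from the figures quoted above; the Titan and Sequoia entries are rough approximations derived from the “about 17% of the Rmax at about half the power” statement, not values taken from the official list.

# Rough power-efficiency comparison (gigaflops per watt) using the figures
# quoted in this post. The Titan and Sequoia entries are approximations based
# on "about 17% of TaihuLight's Rmax at about half the power," not official values.

systems = {
    "Sunway TaihuLight": {"rmax_pflops": 93.0, "power_kw": 15371},
    "Tianhe-2": {"rmax_pflops": 33.86, "power_kw": 17808},
    "Titan (approx.)": {"rmax_pflops": 0.17 * 93.0, "power_kw": 15371 / 2},
    "Sequoia (approx.)": {"rmax_pflops": 0.17 * 93.0, "power_kw": 15371 / 2},
}

for name, s in systems.items():
    gflops = s["rmax_pflops"] * 1e6   # petaflops -> gigaflops
    watts = s["power_kw"] * 1e3       # kilowatts -> watts
    print(f"{name:20s} {gflops / watts:5.2f} gigaflops/watt")

# TaihuLight works out to roughly 6 gigaflops/watt, versus roughly 2 for the others.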

15 July 2016 Update: National Science Foundation (NSF) examines the future directions for NSF advanced computing infrastructure

The NSF recently published a new report entitled, “Future Directions for NSF Advanced Computing Infrastructure to Support U.S. Science and Engineering in 2017-2020.”

NSF adv computing infrastructure report cover   Source: NAP

As described by the authors, this report “offers recommendations aimed at achieving four broad goals: (1) position the U.S. for continued leadership in science and engineering, (2) ensure that resources meet community needs, (3) aid the scientific community in keeping up with the revolution in computing, and (4) sustain the infrastructure for advanced computing.”

The report addresses the TOP500 listing, pointing to several known limitations, and concludes that:

“Nevertheless, the list is an excellent source of historical data, and taken in the aggregate gives insights into investments in advanced computing internationally.”

The NSF report further notes the decline in U.S. ranking in the TOP500 list (see pp. 59 – 60):

“The United States continues to dominate the list, with 45 percent of the aggregate performance across all machines on the July 2015 list, but it has dropped substantially from a peak of over 65 percent in 2008. NSF has had systems either high on the list (e.g., Kraken, Stampede) or comparable to the top systems (i.e., Blue Waters), reflecting the importance of computing at this level to NSF-supported science. Although there are fluctuations across other countries, the loss in performance share across this period is mostly explained by the growth in Asia, with China’s share growing from 1 percent to nearly 14 percent today and Japan growing from 3 to 9 percent.”

The report puts TOP500 rankings in perspective as it addresses future national scale advanced computing needs and operational models for delivering advanced computing services.

If you have a MyNAP account, you can download this report for free from National Academies Press (NAP) at the following link:

http://www.nap.edu/catalog/21886/future-directions-for-nsf-advanced-computing-infrastructure-to-support-us-science-and-engineering-in-2017-2020


Stunning Ultra High Resolution Images From the Google Art Camera

The Google Cultural Institute created the ultra high resolution Art Camera as a tool for capturing extraordinary digital images of two-dimensional artwork. The Institute states:

 “Working with museums around the world, Google has used its Art Camera system to capture the finest details of artworks from their collection.”

A short video at the following link provides a brief introduction to the Art Camera.

https://www.youtube.com/watch?v=dOrJesw5ET8

The Art Camera simplifies and speeds up the process of capturing ultra high resolution digital images, enabling a 1 meter square (39.4 inch square) piece of flat art to be imaged in about 30 minutes. Previously, this task took about a day using third-party scanning equipment.

The Art Camera is set up in front of the artwork to be digitized, the edges of the image to be captured are identified for the camera, and the camera then proceeds automatically, taking ultra high-resolution photos across the entire surface within the identified edges. The resulting set of digital photos is processed by Google and converted into a single gigapixel file.
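
For a sense of scale, the following back-of-the-envelope sketch (in Python) estimates how many individual close-up photos must be stitched together to reach a gigapixel image. The tile resolution and overlap values are illustrative assumptions, not published Art Camera specifications.

# Back-of-the-envelope estimate of how many photos a gigapixel scan requires.
# The tile resolution and overlap are illustrative assumptions, not Art Camera specs.

total_gigapixels = 1.0      # assumed size of the final stitched image
tile_megapixels = 20        # assumed resolution of one close-up photo
overlap = 0.2               # assumed 20% overlap between adjacent photos for stitching

total_pixels = total_gigapixels * 1e9
effective_pixels_per_photo = tile_megapixels * 1e6 * (1 - overlap) ** 2

photos_needed = total_pixels / effective_pixels_per_photo
print(f"Roughly {photos_needed:.0f} overlapping photos, stitched into one "
      f"{total_gigapixels:.0f}-gigapixel file")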

Google has built 20 Art Cameras and is lending them out to institutions around the world at no cost to assist in capturing digital images of important art collections.

You can see many examples of artwork images captured by the Art Camera at the following link:

https://www.google.com/culturalinstitute/project/art-camera

Among the images on this site is the very detailed Chinese ink and color on silk image shown below. The original image measures about 39 x 31 cm (15 x 12 inches). The first image below is of the entire scene. Following are two images that show the higher resolution available as you zoom in on the dragon’s head and reveal the fine details of the original image, including the weave in the silk fabric.

Google cultural Institute image

GCI image detail 1

GCI image detail 2

Image credit, three images above: Google Cultural Institute/The Nelson-Atkins Museum of Art

In the following pointillist painting by Camille Pissarro, entitled Apple Harvest, the complex details of the artist’s brush strokes and points of paint become evident as you zoom in and explore the image. The original image measures about 74 x 61 cm (29 x 24 inches).

Pissaro Apple Harvest

Pissaro image detail 1

Pissaro image detail 2

Image credit, three images above: Google Cultural Institute/Dallas Museum of Art

Hopefully, art museums and galleries around the world will take advantage of Google’s Art Camera or similar technologies to capture and present their art collections to the world in this rich digital format.

Rise of the Babel Fish

In Douglas Adams’ 1978 BBC radio series and 1979 novel, “The Hitchhiker’s Guide to the Galaxy,” we were introduced to the small, yellow, leech-like Babel fish, which feeds on brain wave energy.

Source: http://imgur.com/CZgjO

Adams stated that, “The practical upshot of all this is that if you stick a Babel fish in your ear you can instantly understand anything in any form of language.”

In Gene Roddenberry’s original Star Trek series, a less compact, but, thankfully, inorganic, universal translator served Captain Kirk and the Enterprise crew well in their many encounters with alien life forms in the mid 2260s. You can see a hand-held version (looking a bit like a light saber) in the following photo from the 1967 episode, “Metamorphosis.”

Source: http://visiblesuns.blogspot.com/2014/01/star-trek-metamorphosis.html

A miniaturized universal translator built into each crewmember’s personal communicator soon replaced this version of the universal translator.

At the rate that machine translation technology is advancing here on Earth, it’s clear that we won’t have to wait very long for our own consumer-grade, portable, “semi-universal” translator that can deliver real-time audio translations of conversations in different languages.

Following is a brief overview of current machine translation tools:

BabelFish

If you just want a free on-line machine translation service, check out my old favorite, BabelFish, originally from SYSTRAN (1999), then AltaVista (2003), then Yahoo (2003 – 2008), and today at the following link:

https://www.babelfish.com

With this tool, you can do the following:

  • Translate any language into any one of 75 supported languages
  • Translate entire web pages and blogs
  • Translate full document formats such as Word, PDF and text

When I first used BabelFish more than a decade ago, I often was surprised by the results of reverse-translating the text I had just translated into Russian or French.
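
That kind of “round trip” check, translating into another language and then back again, is a simple way to eyeball translation quality. Here is a minimal sketch of the idea in Python; the translate() helper is purely hypothetical and would be replaced by whatever translation service you actually use.

# Round-trip sanity check for a machine translation engine.
# translate() is a hypothetical placeholder; substitute any real translation
# service or library you have access to.

def translate(text, source, target):
    """Hypothetical call to a translation engine (BabelFish, Google Translate, etc.)."""
    raise NotImplementedError("plug in a real translation service here")

def round_trip(text, other_language):
    """Translate English -> other_language -> English and return the result."""
    forward = translate(text, source="en", target=other_language)
    back = translate(forward, source=other_language, target="en")
    return back

# Compare the original with the round-trip result by eye:
# print(round_trip("The spirit is willing but the flesh is weak.", "ru"))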

While BabelFish doesn’t support real-time, bilingual voice translations, it was an important, early machine translation engine that has evolved into a more capable, modern translation tool.

Google Translate

This is a machine translation service / application that you can access at the following link:

https://translate.google.com

Google Translate also is available as an iPhone or Android app and currently can translate text back and forth between any two of 92 languages.

Google Translate has several other very useful modes of operation, including translating text appearing in an image, translating speech, and translating bilingual conversations.

  • Translate image: You can translate text in images—either in a picture you’ve taken or imported, or just by pointing your camera.
  • Translate speech: You can translate words or phrases by speaking. In some languages, you’ll also hear your translation spoken back to you.
  • Translate bilingual conversation: You can use the app to talk with someone in a different language. You can designate the languages, or the Translate app will recognize which language is being spoken, allowing you to have a (more-or-less) natural conversation.

In a May 2014 paper by Haiying Li, Arthur C. Graesser and Zhiqiang Cai, entitled, “Comparison of Google Translation with Human Translation,” the authors investigated the accuracy of Google Chinese-to-English translations from the perspectives of formality and cohesion. The authors offered the following findings:

“…..it is possible to make a conclusion that Google translation is close to human translation at the semantic and pragmatic levels. However, at the syntactic level or the grammatical level, it needs improving. In other words, Google translation yields a decipherable and readable translation even if grammatical errors occur. Google translation provides a means for people who need a quick translation to acquire information. Thus, computers provide a fairly good performance at translating individual words and phrases, as well as more global cohesion, but not at translating complex sentences. “

You can read the complete paper at the following link:

https://www.aaai.org/ocs/index.php/FLAIRS/FLAIRS14/paper/viewFile/7864/7823

A December 2014 article by Sumant Patil and Patrick Davies, entitled, “Use of Google Translate in Medical Communication: Evaluation of Accuracy,” also pointed to current limitations in using machine translations. The authors examined the accuracy of translating 10 common medical phrases into 26 languages (8 Western European, 5 Eastern European, 11 Asian, and 2 African) and reported the following:

“Google Translate has only 57.7% accuracy when used for medical phrase translations and should not be trusted for important medical communications. However, it still remains the most easily available and free initial mode of communication between a doctor and patient when language is a barrier. Although caution is needed when life saving or legal communications are necessary, it can be a useful adjunct to human translation services when these are not available.”

The authors noted that translation accuracy depended on the language, with Swahili scoring lowest with only 10% correct, and Portuguese scoring highest at 90%.

You can read this article at the following link:

http://www.bmj.com/content/349/bmj.g7392

ImTranslator

ImTranslator, by Smart Link Corporation, is another machine translation service / tool, which you can find at the following link:

http://imtranslator.net

ImTranslator uses several machine translation engines, including Google Translate, Microsoft Translator, and Babylon Translator. One mode of ImTranslator operation is called “Translate and Speak,” which delivers the following functionality:

“….translates texts from 52 languages into 10 voice-supported languages. This … tool is smart enough to detect the language of the text submitted for translation, translate into voice, modify the speed of the voice, and even create an audio link to send a voiced message.”

I’ve done a few basic tests with Translate and Speak and found that it works well with simple sentences.

In conclusion

Machine translation has advanced tremendously over the past decade and improved translation engines are the key for making a universal translator a reality. Coupled with cloud-based resources and powerful smart phone apps, Google Translate is able to deliver an “initial operating capability” (IOC) for a consumer-grade, real-time, bilingual voice translator.

This technology is out of the lab, rapidly improving based on broad experience from performing billions of translations, and seeking commercial applications. Surely in the next decade, we’ll be listening through our ear buds and understanding spoken foreign languages with good accuracy in multi-lingual environments. Making this capability “universal” (at least on Earth) will be a challenge for the developers, but a decade is a long time in this type of technology business.

There may be a downside to the widespread use of real-time universal translation devices. In “The Hitchhiker’s Guide to the Galaxy,” Douglas Adams noted:

“…..the poor Babel fish, by effectively removing all barriers to communication between different races and cultures, has caused more and bloodier wars than anything else in the history of creation.”

Perhaps foreseeing this possibility, Google Translate includes an “offensive word filter” that doesn’t allow you to translate offensive words by speaking. As you might guess, the app has a menu setting that allows the user to turn off the offensive word filter. Trusting that people always will think before speaking into their unfiltered universal translators may be wishful thinking.

19 May 2016 Update:

Thanks to Teresa Marshall for bringing to my attention the in-ear, real-time translation device named Pilot, which was developed by the U.S. firm Waverly Labs. For all appearances, Pilot is almost an electronic incarnation of the organic Babel fish. The initial version of Pilot uses two Bluetooth earbuds (one for you, and one for the person you’re talking to in a different language) and an app that runs locally on your smartphone without requiring web access. The app mediates the conversation in real-time (with a slight processing delay), enabling each user to hear the conversation in their chosen language.

real-time-translator-ear-waverly-labs-3Photo credit: Waverly Labs

As you might guess, the initial version of Pilot will work with the popular Romance languages (e.g., French and Spanish), with a broader language handling capability coming in later releases.

Check out the short video from Waverly Labs at the following link:

https://www.youtube.com/watch?v=VO-naxKNuzQ

I can imagine that Waverly Labs will develop the capability for the Pilot app to listen to a nearby conversation and provide a translation to one or more users on paired Bluetooth earbuds. This would be a useful tool for international travelers (e.g., on a museum tour conducted in a foreign language) and spies.

You can find more information on Waverly Labs at the following link:

http://www.waverlylabs.com

Developing the more advanced technology to provide real-time translations in a noisy crowd with multiple, overlapping speakers will take more time, but at the rate that real-time translation technology is developing, we may be surprised by how quickly advanced translation products enter the market.


Compact, Mobile 3D Scanning Systems that can Render a Complete, Editable 3D Model in Minutes

The Cubify (cubify.com) iSense 3D scanner is a high-resolution, infrared depth sensor that clips onto an iPad, uses the iPad’s camera, accelerometer, and gyroscopes to understand its orientation relative to the subject, and, with the all-important 3D scanning application running on the iPad, creates a scale 3D model of the subject.

Source: Cubify

I first saw the iSense 3D scanner demonstrated in July 2015 at Comic-Con San Diego. With the scanner attached to the back of an iPad, the person conducting the demonstration selected a subject to be scanned and then walked around that person at a distance of about three feet while monitoring the real-time scan progress on the iPad screen. In about 90 seconds the scan around the subject was complete, and it took about another 90 seconds for the software to render the 3D model (a “point cloud”) of the subject’s head. Since this was a quick demonstration, there were a couple of small voids in the 3D model (e.g., under the chin and nose, where the scanner didn’t “see”), but otherwise the resulting model was an accurate scale representation of the subject. What was even more remarkable was that this process was done using the computing power of a current-generation iPad. The resulting color 3D model could be processed further (e.g., to create a mesh model) or sent for printing to a local 3D printer or a printing service accessed via the internet. A version of iSense for the iPhone also is available.
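
For readers curious what the scanner’s output looks like under the hood, a point cloud is essentially just a big list of 3D points, usually with a color attached to each. The short Python sketch below uses made-up points to illustrate the representation and how real-world dimensions fall out of a to-scale scan; it is not based on the iSense file format.

import numpy as np

# A point cloud is essentially an (N, 6) table: x, y, z coordinates (in meters)
# plus an RGB color for each point. The points below are synthetic stand-ins
# for a real scan; this is not the iSense file format.

rng = np.random.default_rng(0)
xyz = rng.normal(loc=0.0, scale=0.08, size=(5000, 3))   # a roughly head-sized blob
rgb = rng.integers(0, 256, size=(5000, 3))
cloud = np.hstack([xyz, rgb])
print("point cloud shape:", cloud.shape)

# Because the scan is to scale, real-world dimensions come straight from the data.
mins, maxs = xyz.min(axis=0), xyz.max(axis=0)
print("bounding box (m):", np.round(maxs - mins, 3))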

You can read the technical specifications for iSense at the following link:

http://cubify.com/products/isense

A similar scanner with greater capabilities is the Structure Sensor from Occipital. The Structure Sensor operates over greater distances than the iSense and appears to be intended to support a greater range of applications, including the following:

  • Capture dense 3D models of objects
    • When used as a 3D scanner, Structure Sensor allows you to capture dense geometry in real-time and create high-fidelity 3D models with high-resolution textures.
    • The resulting model can be sent to a printer for manufacturing, or used in connection with a simulation tool to model the real world physics behavior of the object.
    • The Structure Sensor uses the iPad’s color camera to add high-quality color textures to the 3D model captured.
  • Measure entire rooms all at once.
    • 3D depth sensing enables the rapid capture of accurate dimensions of objects and environments.
    • Structure Sensor captures everything in view, all at once.
    • Software simplifies large-scale reconstruction tasks
  • Unlock the power of real-time occlusion and physics
    • Once objects or whole environments have been captured by the Structure Sensor, the resulting model constitutes a virtual environment with specified physical properties. Other virtual objects can interact with this model based on the assigned physical properties (e.g., bounce off surfaces, or move under tables or behind structures).
    • Virtual environments can be rapidly developed and integrated seamlessly with games or simulations

You can find more information on the Structure Sensor at the following link:

http://structure.io

If you are curious about this type of scanning technology, there are several demonstrations available on YouTube. If you are willing to spend 21 minutes to watch a detailed test of the Structure Sensor, I recommend the 9 December 2014, “Tested In-Depth: Structure Sensor 3D Scanner,” by Will and Norm, which you can view at the following link:

https://www.youtube.com/watch?v=mnOzzbl0Uqw

Here are a few screen shots from Will & Norm’s scanning demonstration. During the scan, the white areas represent areas that have been successfully scanned.

Structure Sensor scan 1

The complete point cloud model is shown below. This model can be rotated and viewed from any angle.

Structure Sensor scan 2

The rendered model, with colors and textures captured by the iPad’s camera, is shown below.

Structure Sensor scan 3

So, at Uncle Joe’s 90th birthday party, get out your iPad with an iSense or Structure Sensor, capture Uncle Joe in 3D, and print a bust of Uncle Joe to commemorate the occasion. If you’re more ambitious, you can capture the whole room with a Structure Sensor and build a game or simulation into this virtual environment.

Recent reviews posted online indicate that this type of 3D scanning is not yet mature and it may be difficult to get repeatable good results. Nonetheless, it will be interesting to see the creative applications of this scanning technology that emerge in the future.

Using Light Instead of Radio Frequencies, Li-Fi has the Potential to Replace Wi-Fi for High-speed Local Network Communications

Professor Harald Haas (University of Edinburgh) invented Li-Fi wireless technology, which is functionally similar to radio-frequency Wi-Fi but uses visible light to communicate at high speed among devices in a network. Professor Haas is the founder of the company PureLiFi (http://purelifi.com), which is working to commercialize this technology. The following diagram from PureLiFi explains how Li-Fi technology works.

Li-Fi-How_VLC_works

A special (smart) LED (light-emitting diode) light bulb capable of modulating its output light and a photoreceptor connected to the end-user’s device are required.
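
Conceptually, the LED is switched or dimmed far faster than the eye can follow, and the photoreceptor converts the received light levels back into bits. The Python sketch below illustrates the simplest possible scheme, on-off keying; real Li-Fi systems use much faster and more sophisticated modulation, so treat this only as an illustration of the principle.

# Minimal on-off keying (OOK) illustration of visible light communication.
# Real Li-Fi systems use much faster, more sophisticated modulation; this
# sketch only shows the basic idea of mapping bits to light levels and back.

def encode(data):
    """Transmitter: map each bit of each byte to an LED level (1 = on, 0 = off)."""
    levels = []
    for byte in data:
        for i in range(7, -1, -1):        # most significant bit first
            levels.append((byte >> i) & 1)
    return levels

def decode(levels):
    """Receiver: rebuild bytes from the photoreceptor's sampled light levels."""
    out = bytearray()
    for i in range(0, len(levels), 8):
        byte = 0
        for bit in levels[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

message = b"Li-Fi"
assert decode(encode(message)) == message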

You can see Professor Haas’ presentation on Li-Fi technology on the TED website at the following link:

http://www.ted.com/talks/harald_haas_wireless_data_from_every_light_bulb?language=en#t-233169

Key differences between Li-Fi and Wi-Fi include:

  • Li-Fi is implemented via a smart LED light bulb that includes a microchip for handling the local data communications function. Many LED light bulbs can be integrated into a broader network with many devices.
    • Light bulbs are everywhere, opening the possibility for large Li-Fi networks integrated with modernized lighting systems.
  • Li-Fi offers significantly higher data transfer rates than Wi-Fi.
    • In an industrial environment, Estonian startup firm Velmenni has demonstrated 1 Gbps (gigabit per second). Under laboratory conditions, rates up to 224 gigabits/sec have been achieved.
  • Li-Fi requires line-of-sight communications between the smart LED light bulb and the device using Li-Fi network services.
    • While this imposes limitations on the application of Li-Fi technology, it greatly reduces the potential for network interference among devices.
  • Li-Fi may be usable in environments where Wi-Fi is not an acceptable alternative.
    • Some hazardous gas and explosive handling environments
    • Commercial passenger aircraft, where wireless devices must be in “airplane mode” with Wi-Fi off
    • Some classified / high-security facilities
  • Li-Fi cannot be used in some environments where Wi-Fi can be successfully employed.
    • Bright sunlight areas or other areas with bright ambient lighting

You can see a video with a simple Li-Fi demonstration using a Velmenni Jugnu smart LED light bulb and a smartphone at the following link:

http://velmenni.com

Velmenni smart LED

The radio frequency spectrum for Wi-Fi is crowded and will only get worse in the future. A big benefit of Li-Fi technology is that it does not compete for any part of the spectrum used by Wi-Fi.


A Neural Algorithm of Artistic Style

Authors Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge published the subject research paper on 26 August 2015, introducing “an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality.”

Convolutional Neural Networks are a class of Deep Neural Networks that are very powerful and well suited for image processing tasks; they are commonly used in object and facial recognition systems. The authors explain how their neural algorithm works in a Convolutional Neural Network to independently capture content and style in a composite image that represents the content of an original image in a style derived from an arbitrarily selected second image. The authors state: “The key finding of this paper is that the representations of content and style in the Convolutional Neural Network are separable. That is, we can manipulate both representations independently to produce new, perceptually meaningful images.”
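
In the paper, content is represented by a layer’s feature maps and style by the correlations between feature maps (Gram matrices), and the generated image is optimized to match both at once. The Python sketch below shows only the loss construction, using random placeholder activations in place of a real CNN; it is an outline of the idea, not the authors’ implementation.

import numpy as np

# Simplified sketch of the loss in Gatys et al.: total = alpha*content + beta*style.
# The feature maps here are random placeholders standing in for CNN activations.

def gram_matrix(features):
    """Correlations between feature maps; features has shape (channels, positions)."""
    return features @ features.T

def content_loss(generated, content):
    return 0.5 * np.sum((generated - content) ** 2)

def style_loss(generated, style):
    channels, positions = generated.shape
    g_gen, g_style = gram_matrix(generated), gram_matrix(style)
    return np.sum((g_gen - g_style) ** 2) / (4 * channels ** 2 * positions ** 2)

# Placeholder activations for one layer (e.g., 64 feature maps over a 32 x 32 grid).
rng = np.random.default_rng(0)
content_feats = rng.normal(size=(64, 32 * 32))
style_feats = rng.normal(size=(64, 32 * 32))
generated_feats = rng.normal(size=(64, 32 * 32))

alpha, beta = 1.0, 1000.0    # relative weighting of content vs. style
total = alpha * content_loss(generated_feats, content_feats) \
        + beta * style_loss(generated_feats, style_feats)
print(f"total loss = {total:.2f}")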

In their paper, the authors selected the following photo to define the image content.

Neural net pic 1

Two examples of the image selected to define the style, and the resulting final image created by the neural algorithm are shown below.

Style derived from The Starry Night by Vincent van Gogh, 1889.

Neural net pic2

Style derived from Der Schrei by Edvard Munch, 1893

Neural net pic3

I find these results to be simply amazing in terms of their artistic composition and their effective implementation of the selected style.

It probably is premature, but I hope there soon will be a reasonably priced app for this that runs on a Mac or PC. I would buy that app in a heartbeat.

You can download the full paper, which includes all of the examples shown above, from the Cornell University Library at the following link:

http://arxiv.org/abs/1508.06576


Dr. Seuss Explains Why Computers Sometimes Crash

Thanks to Dave Groce for bringing the following bit of Dr. Seuss wisdom to our attention. You also can find it at the following link:

http://bomb-diggity.com/dr_seuss.htm

Source: bomb-diggity.com

If a packet hits a pocket on a socket on a port,

And the bus is interrupted at a very last resort,

And the access of the memory makes your floppy disk abort,

Then the socket packet pocket has an error to report.

If your cursor finds a menu item followed by a dash,

and the double-clicking icon puts your window in the trash;

and your data is corrupted cuz the index doesn’t hash,

then your situation’s hopeless and your system’s gonna crash!

If the label on the cable on the table at your house

Says the network is connected to the button on your mouse,

But your packets want to tunnel to another protocol,

That’s repeatedly rejected by the printer down the hall,

And your screen is all distorted by the side effects of gauss,

So your icons in the window are as wavy as a souse;

Then you may as well reboot and go out with a bang,

‘Cuz sure as I’m a poet, the sucker’s gonna hang!

When the copy of your floppy’s getting sloppy in the disk

And the microcode instructions cause unnecessary risk,

Then you’ll have to flash the memory and you’ll want to RAM your ROM.

Quickly turn off the computer and be sure to tell your Mom.

Will Your Job Be Done By A Machine?

In September 2013, University of Oxford researchers Carl Benedikt Frey and Michael Osborne published a paper entitled, “The Future of Employment: How Susceptible Are Jobs to Computerisation?” In this paper, they estimated that 47% of total U.S. jobs have a high probability of being automated and replaced by computers by 2033. Their key results are summarized in the following graphic.

Frey & Osborn key results-2013 paper

You can download their paper for free at the following link:

http://www.futuretech.ox.ac.uk/sites/futuretech.ox.ac.uk/files/The_Future_of_Employment_OMS_Working_Paper_0.pdf

On 25 February 2015, Fortune published an article entitled, “5 white-collar jobs robots already have taken.” This article identifies the affected jobs as:

  • Financial and sports reporters
  • Online marketers
  • Anesthesiologists, surgeons, and diagnosticians
  • E-discovery lawyers and law firm associates
  • Financial analysts and advisors

You can read the complete article at the following link:

http://fortune.com/2015/02/25/5-jobs-that-robots-already-are-taking/

On 21 May 2015, NPR posted an interesting interactive article that provides rough estimates of the likelihood that particular jobs will become automated in the future. The ranking is based on the following factors:

  • Do you need to come up with clever solutions?
  • Are you required to personally help others?
  • Does your job require you to squeeze into small spaces?
  • Does your job require negotiation?

You can try out this interactive site at the following link:

http://www.npr.org/sections/money/2015/05/21/408234543/will-your-job-be-done-by-a-machine?utm_source=howtogeek&utm_medium=email&utm_campaign=newsletter

The interactive tool does not include many technical professions in science or engineering. Nonetheless, the results for the jobs you can select are insightful. Here are a few example screenshots from the NPR link above:

College professor automation

Aircraft mechanic automation.

Bookkeeper automation

Choosing a career is always a complicated process, but these recent studies clearly show that some careers will be marginalized by automation in the relatively near future.

Update on Supercomputer Performance and Development

The TOP500 project was launched in 1993 to implement an improved statistical process for benchmarking the performance of large general purpose computer systems and maintain a list of the 500 most powerful general purpose computer systems in the world based on benchmark test results. The TOP500 website is at:

http://www.top500.org

The TOP500 list ranks computers by their performance on a LINPACK benchmark test to solve a dense system of linear equations. While this performance metric does not reflect the overall performance of a given system, the systematic application of this benchmark test provides a good measure of peak performance and enables a meaningful relative ranking.
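
As a toy illustration of the kind of work the benchmark measures (this is not the HPL benchmark code itself), the following Python snippet solves a dense random system with NumPy and estimates a flop rate from the nominal 2/3·n³ operation count for LU factorization.

import time
import numpy as np

# Toy illustration of the kind of work LINPACK measures: solve a dense n x n
# system Ax = b and estimate a flop rate from the nominal (2/3)*n^3 operation
# count for LU factorization. This is not the actual HPL benchmark code.

n = 2000
rng = np.random.default_rng(0)
A = rng.normal(size=(n, n))
b = rng.normal(size=n)

start = time.perf_counter()
x = np.linalg.solve(A, b)     # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n ** 3
print(f"n = {n}: {elapsed:.3f} s, about {flops / elapsed / 1e9:.1f} gigaflops")
print("residual norm:", np.linalg.norm(A @ x - b))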

The TOP500 list is updated in June and November each year. Tianhe-2 (Milky Way), a supercomputer developed by China’s National University of Defense Technology, has maintained the top position in four consecutive TOP500 lists with a performance of 33.86 petaflop/s (quadrillions of calculations per second), using 17.8 MW (megawatts) of electric power. The growth in supercomputer performance over the past 20 years is shown in the following chart:

TOP500 supercomputer performance chart. Source: TOP500

You can access the November 2014 TOP500 list at the following link:

http://www.top500.org/list/2014/11/

On 9 April 2015, the U.S. Department of Energy announced a $200 million investment to deliver a next-generation U.S. supercomputer, known as Aurora, to the Argonne Leadership Computing Facility (ALCF) near Chicago. Read the DOE announcement at the following link:

http://energy.gov/articles/us-department-energy-awards-200-million-next-generation-supercomputer-argonne-national

Intel will work with Cray Inc. as the Aurora system integrator, subcontracted to provide its scalable system expertise together with its proven supercomputing technology and HPC (high-performance computing) software stack. Aurora will be based on a next-generation Cray supercomputer, code-named “Shasta,” a follow-on to the Cray® XC™ series. Aurora is expected to have a peak performance of 180 petaflop/s. When commissioned in 2018, this supercomputer will be open to all scientific users.

Argonne and Intel will also provide an interim system, called Theta, to be delivered in 2016, which will help ALCF users transition their applications to the new technology to be used in Aurora.

DOE earlier announced a $325 million investment to build new, state-of-the-art supercomputers at its Oak Ridge and Lawrence Livermore laboratories.