TechFest gives glimpse of Microsoft’s future
The annual demonstration by company researchers gives a broad view of three of Microsoft’s focus areas for the future: natural user interfaces, big data and machine learning.
Seattle Times technology reporter
From a smartphone app capable of capturing 3-D scans to interactive whiteboards to a browser-based program allowing users to build a predictive model in minutes, the preview Tuesday of Microsoft’s TechFest 2013 was full of cool stuff.
But the demos were about more than just cool. Taken together, they gave a broad yet cohesive view of three areas Microsoft is concentrating on for the future:
• Natural user interface — meaning interacting with computing devices using touch, speech or gestures.
• Big data — synthesizing and making useful large amounts of information.
• Machine learning — enabling computers to learn from data rather than relying solely on explicit programming.
TechFest is the company’s annual science fair at which its advanced researchers from around the world demonstrate what they’re working on.
On Tuesday, a handful of the approximately 150 demonstrations were shown to some customers, partners and the media. On Wednesday and Thursday, thousands of Microsoft employees are expected to attend.
Microsoft employs some 850 Ph.D.-level researchers worldwide — about half in the U.S. — and spends about $9 billion a year on research and development.
That makes Microsoft the No. 1 computer-science research organization in the world, according to Rick Rashid, Microsoft’s chief research officer.
The company, though, has been criticized over whether it gets a good return on its heavy R&D investment.
Rashid addressed the issue during his keynote address Tuesday morning, saying that Microsoft Research generates about a quarter of the company’s patents, provides “early warning” on new technologies, and has seen its work end up in almost all Microsoft products.
Indeed, a few of the projects on view are expected to be included in some upcoming Microsoft releases. Other projects were in the prototype stage.
Many of the demonstrations featured work on natural user interfaces, especially those allowing a user to interact with a big display screen.
Researcher Michel Pahud, for instance, is working on an interface that allows people to use touch and a pen at the same time on a digital whiteboard.
A sensor can also detect when a user steps a few feet away from the board, allowing the presenter to use her smartphone as a controller for the display.
Similarly, researcher Bongshin Lee showed off SketchInsight, which allows users to draw simple shapes — an “L” or a circle — onto a digital whiteboard. SketchInsight then uses uploaded data to turn those simple sketches into automatically populated graphs or charts.
The goal, with these as well as other natural user-interface projects, said Microsoft Principal Researcher Bill Buxton, is to get to the point where these technologies come together and become transparent — when users don’t even think of their computing device as one.
For instance, a smartphone, when connected to a car, would become part of the car, allowing the user to do different things than if the smartphone were connected to a large digital display.
Where you use your phone then “fundamentally changes the nature of your phone” so that your smartphone and the screen come “seamlessly together so it’s one device,” Buxton said.
“If you think ‘computer,’ it’s a failure of design.”
Microsoft’s Kinect voice and motion sensor, along with 3-D technology, also featured prominently at TechFest this year.
Researchers showed off a new version of Kinect Fusion that will enable 3-D scanning using Kinect. That technology is expected to be included in an upcoming release of the Kinect for Windows software development kit.
Researchers also demonstrated technology to capture 3-D images in motion in real time, a smartphone app that lets people capture 3-D scans, and a way to use Kinect to scan your body to create the basis for 3-D avatars.
Among the demonstrations featuring big data was one that aims to use crowd-sourced audio from smartphones to create a real-time database of how noisy and crowded businesses are.
When people check in to a restaurant, for instance, an app will record audio for eight seconds. By aggregating such recordings, said researcher Dimitrios Lymberopoulos, it will be possible to determine the restaurant’s occupancy, noise, music and chatter level. That could be useful to people searching on Bing for, say, a quiet restaurant for a romantic dinner.
“It’s a new big data set that has not been created before that can help us understand the physical world in real time,” he said.
Several researchers demonstrated projects that visualize big data in ways that allow users to see both the big picture and to zoom in on detailed, minute aspects of the data.
Others showed projects allowing people to build predictive models in minutes.
Among those working on machine learning and natural user interface was researcher Cem Keskin, who is getting the Kinect to read hand gestures. Kinect currently reads larger, skeletal motions.
Keskin is training Kinect to read more detailed gestures, such as a user closing both fists and bringing them together or apart to zoom in or out, or moving a cursor on a screen with open hands.
That technology will be included in the upcoming Kinect for Windows software-development kit release.
“All these technologies are coming to maturity — both at Microsoft and in the industry at large,” said Steve Clayton, who writes about Microsoft Research for the company.
Janet I. Tu: 206-464-2272 or email@example.com. On Twitter @janettu.