Courtesy Olivia Solon, wired.co.uk
Earlier this year, Wired.co.uk wrote about Google’s invention of a smart contact lens that could monitor blood glucose levels through tear fluid. Now, the tech giant has invented another pair of lenses with an in-built camera.
The lenses were developed in the Google X lab and were featured in a patent filing dating from 2012, which was recently published by the US Patent and Trademark Office. The patent filing describes a contact lens that includes an embedded circuit, camera and sensor. The control circuit could be linked wirelessly or via a wire to the camera and sensor. The sensor could be a light sensor, pressure sensor, temperature sensor or electrical field sensor, which may allow people to gain a "sixth sense" of sorts.
While the project might seem a bit "out there", the technology isn’t all that far off — smart contact lenses with displays have already been tested in labs, although they’ve been a little clunky up until now. One of the key benefits of having a camera embedded in a contact lens rather than attached to the side of the head like Google Glass is that the camera frame would follow a person’s precise gaze without obstructing their view (by being placed along the edge of the lens, away from the pupil).
In the patent filing — as described in great detail over at Patent Bolt — Google points out that the system could take a raw image from the contact lens, process it and relay what it sees to a blind wearer via a different sense — perhaps an audio warning that there is a car approaching a junction, for example. There may also be the option of go-go-gadget eyes with a zoom capability.
If these contact lenses ever do come to market, it means you can leapfrog the Glasshole stage and go straight to Lenshole. Or whatever the neologism for that will be. In the meantime you can, for one day only, join the Glass Explorer programme today (15 April).
French workers already have a 35-hour work week, five vacation weeks a year, and now, some aren’t allowed to be contacted by their employer after 6:00pm.
A new deal agreed between tech-industry unions and employers in France means employees are no longer required to answer work-related emails after 6 p.m., according to reports. The deal reportedly covers one million workers in the country's digital and consultancy sectors. Tech giants Google (GOOG) and Facebook (FB) have operations in France.
This deal means workers won't be under any pressure to respond to their higher-ups after clocking out, and they can't be reprimanded for ignoring such messages in the highly unionized country.
Pam Villarreal, U.S. labor expert at the National Center for Policy Analysis, calls the deal “absurd.”
“Within the tech industry and digital consultancy sectors, there’s always something going wrong off the clock—when a computer goes down, it doesn’t go down between 8 a.m. and 5 p.m.” she says. “Even though workers overwhelmingly support this, it will be interesting to see how it pans out in terms of productivity—knowing people who work in the tech industry, it’s one of the most likely where something goes wrong after hours.”
She adds the deal could bring the unintended consequence of higher labor costs in the country’s industry. “They may have to shift workers to after 5 p.m. to deal with these issues, so it may drive up the cost of labor.”
French tech workers aren’t the only ones getting a reprieve from the stress of a never-ending workday, as a Swedish city is experimenting with a six-hour workday, in an effort to improve productivity and happiness among workers.
France also famously shortened its workweek to 35 hours instead of the standard 40, in 2000.
“They are likely trying to reduce the unemployment rate, which is at 8%, and almost 24% for those under 25,” Villarreal says of Sweden. “But we don’t know if the experiment will work—people will be trying to cram 8 hours of work into a 6-hour day.”
Villarreal says Swedish companies may also turn to outsourcing if labor costs climb, explaining that France’s labor costs increased as a result of its shortened work week.
Neither move is likely to hit the U.S. anytime soon, Villarreal says, especially the no-contact-after-work agreement.
“Think of big tech corporations—they wouldn’t agree to not contacting their employees after 6,” she says. “Google and Facebook are [probably] not so welcoming of this change in France. If it happened here, companies would probably try to outsource to India or China, countries without such strict labor regulations.”
Courtesy Allen McDuffee, Wired.com
The U.S. Navy is tapping the power of the Force to wage war.
Its latest weapon is an electromagnetic railgun launcher. It uses a form of electromagnetic energy known as the Lorentz force to hurl a 23-pound projectile at speeds exceeding Mach 7. Engineers already have tested this futuristic weapon on land, and the Navy plans to begin sea trials aboard the Joint High Speed Vessel USNS Millinocket in 2016.
“The electromagnetic railgun represents an incredible new offensive capability for the U.S. Navy,” Rear Adm. Bryant Fuller, the Navy’s chief engineer, said in a statement. “This capability will allow us to effectively counter a wide range of threats at a relatively low cost, while keeping our ships and sailors safer by removing the need to carry as many high-explosive weapons.”
The massive railgun, which needs just one sailor to operate it, relies on the electromagnetic energy of the Lorentz force — the combination of electric and magnetic forces on a point charge — for power.
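For the curious, the physics can be sketched briefly. The Lorentz force on a charge q moving with velocity v through an electric field E and magnetic field B is the standard expression below; the muzzle-energy figure is our own rough back-of-envelope estimate from the numbers quoted above (a 23-pound projectile at roughly Mach 7), not an official Navy specification:

```latex
\mathbf{F} = q\left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right)

% Rough estimate from the figures quoted in the article:
% m \approx 23\ \text{lb} \approx 10.4\ \text{kg},\qquad
% v \approx \text{Mach } 7 \approx 2400\ \text{m/s (sea level)}
E_k = \tfrac{1}{2} m v^2 \approx \tfrac{1}{2}(10.4\ \text{kg})(2400\ \text{m/s})^2 \approx 3\times 10^{7}\ \text{J} \approx 30\ \text{MJ}
```

In other words, the projectile carries on the order of tens of megajoules of kinetic energy at launch, which is why no explosive warhead is needed.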
The Navy likes the weapon for several reasons, not the least of which is its 100-mile range and its lack of explosive warheads. That makes it far safer for sailors, and cheaper for taxpayers. According to the Navy, each 18-inch projectile costs about $25,000, compared to $500,000 to $1.5 million for conventional missiles.
“[It] will give our adversaries a huge moment of pause to go: ‘Do I even want to go engage a naval ship?’” Rear Admiral Matt Klunder told reporters. “Because you are going to lose. You could throw anything at us, frankly, and the fact that we now can shoot a number of these rounds at a very affordable cost, it’s my opinion that they don’t win.”
The Navy’s been talking about using railguns for the past ten years. The Office of Naval Research launched a prototype program in 2005, with an initial investment of $250 million committed through 2011. The Navy anticipates spending about that much more by 2017.
Of course the Army is interested in having one too, and the Pentagon is in general interested in many aspects of the technology. In July, the Navy will display the electromagnetic railgun prototype at San Diego Naval Base.
“Frankly, we think it might be the right time for them to know what we’ve been doing behind closed doors in a Star Wars fashion,” said Klunder. “It’s now reality. It’s not science fiction. It’s real and you can look at it.”
Courtesy KLINT FINLEY, Wired.com
In the Iron Man movies, Tony Stark uses a voice-controlled computer assistant called J.A.R.V.I.S. It manages the lights and security system in his home, helps him pilot his Iron Man suits, and even assists with his research. Some of this is still very much in the realm of science fiction, but not all of it. Inspired by the Iron Man movies, two Princeton students have built a J.A.R.V.I.S. for the real world.
“That was even the initial project name — ‘J.A.R.V.I.S.’ — until we decided that it was too unoriginal,” says one of the project’s creators, Charles Marsh. Instead, they now call it Jasper.
No, Jasper isn’t as sophisticated as its science-fiction inspiration. It’s more like Siri or Google Now, but instead of running on your smartphone, it operates from a small, stand-alone unit with a microphone and an internet connection. And it’s open source, meaning anyone can take the designs, build their own, and modify it as needed.
Jasper acts as an “always on” system. When you say its name, it will respond with a beep indicating that it’s ready for instructions. So far, it can do things like tell you whether you have new Facebook notifications or Gmail messages, play songs from Spotify and, of course, tell you what the weather is like. It also offers a developer interface that lets outside programmers add new tools that can be triggered by additional keywords.
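To get a feel for how an “always on” assistant routes spoken keywords to tools, here is a minimal sketch of that kind of wake-word and keyword-dispatch loop. The module names, handler signatures, and canned responses below are purely illustrative assumptions, not Jasper’s actual API; a real build would feed the loop from a speech-recognition engine such as CMUSphinx rather than plain strings:

```python
# Illustrative sketch of a wake-word + keyword-dispatch loop, in the spirit
# of Jasper's developer interface. All names here are hypothetical.

WAKE_WORD = "jasper"


def weather_module(text):
    # A real module would call a weather API; this is a canned stand-in.
    return "It is sunny today."


def gmail_module(text):
    # A real module would check an inbox; this is a canned stand-in.
    return "You have 2 new Gmail messages."


# Each "module" registers the keywords that should trigger it.
MODULES = [
    ({"weather", "forecast"}, weather_module),
    ({"gmail", "email"}, gmail_module),
]


def dispatch(text):
    """Route a transcribed utterance to the first module whose keywords match."""
    words = set(text.lower().split())
    for keywords, handler in MODULES:
        if words & keywords:
            return handler(text)
    return "Sorry, I didn't understand that."


def listen_once(utterance):
    """Simulate one interaction: beep on the wake word, then handle the command."""
    if WAKE_WORD in utterance.lower():
        print("*beep*")  # signals the system is ready for instructions
        command = utterance.lower().replace(WAKE_WORD, "", 1).strip()
        return dispatch(command)
    return None  # wake word not heard; stay idle


if __name__ == "__main__":
    print(listen_once("Jasper, what is the weather like?"))
```

The developer interface the article describes works along these lines: a new tool only needs to declare its trigger keywords and supply a handler, and the dispatcher takes care of the rest.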
Marsh built the tool alongside a fellow Princeton student named Shubhro Saha. Most of the development happened over the summer, while Marsh was interning at Microsoft in Seattle and Saha at the online advertising outfit AppNexus in New York City. “Every night after work, we’d hold a Google Hangout to discuss design decisions, bugs, TODOs, and everything else we needed to get done,” Marsh remembers.
But like most open source projects, Jasper stands on the shoulders of existing open source code. Much of the voice recognition system, for example, is built on CMUSphinx, CMUCLTK and Phonetisaurus. “We saw Jasper as a great way to show developers what’s possible within the realm of open source,” he says. “We were amazed by how far we could get with these free, open solutions, and we wanted others to be similarly inspired.”
If you’re inspired, one option is to help the two expand the reach of Jasper. You can build your own with little more than a Raspberry Pi mini-computer, a speaker, and a microphone. Jasper’s source code has only been out for a day, but Marsh says they’ve already heard from several developers interested in building on top of it. “One individual mentioned that he was looking into powering his entire home with Jasper after wiring it with microphones,” he says. “Another asked us about automating vehicles in his factory with Jasper-powered voice control. Another even mentioned using Jasper in the classroom as a tool to teach kids about programming.”
For the time being, Marsh says, he and Saha have no plans to build a business around the tool. They simply want others to join in. “When we were planning out the Jasper vision, what we really saw was a platform for hackers: its beauty lay in its extensibility,” he explains. “Nothing excited us more than to see what other programmers could do with the device.” Such is the beauty of open source.
Courtesy Richard Hollingham, bbc.com
Courtesy Jack Clark, theRegister.co.uk
AMD has migrated terabytes of information from an Oracle Database installation to an Apache Hadoop stack, claiming Oracle’s pricey software was suffering from scaling issues.
The chip maker’s chief information officer, Jake Dominguez, revealed further details of the transfer in a chat with The Reg.
"Within the common Oracle platform we had we were struggling from a performance and reliability perspective," Dominguez told your correspondent in Atlanta just before the weekend. "One of the areas we were struggling with was in our test and assembly manufacturing – large, large datasets."
The migration of 276TB of data, which was completed last year, was prompted by "an environment outage that took weeks to recover," according to an internal document seen by El Reg. This encouraged AMD to replace Oracle with something else.
In the end, the processor giant settled on Cloudera’s Hadoop distribution alongside additional open-source projects: Apache Hive, ZooKeeper, HBase, HDFS, httpfs, LZO compression, MapReduce and others.
According to AMD, the Hadoop software has an unlimited row limit for query results compared to 100,000 rows on the chip giant’s Oracle setup, and "99 per cent of all queries execute in 15 minutes or less, with a median execution time of just 23 seconds."
What makes this shift so significant is that Oracle wants you to think AMD is the sort of company that will always use Oracle kit.
Oracle is grappling with a shift in the data warehouse and analytics market: its core business is being squeezed by free and open-source on-premises software, and its cloud wing is facing off with Amazon Web Services and the like.
Many organizations have sought to extricate themselves from Oracle’s grip, either by swapping out Oracle-owned open-source tech for other software, as Google did with a vast MySQL to MariaDB migration, or by shifting away from the company’s proprietary databases to open-source ones, as the UK’s National Health Service did with a major Riak migration.
One of the main open-source technologies commonly being deployed to supplement or replace Oracle is Hadoop, a data storage and processing framework that was first developed at Yahoo! in 2005 by engineers attempting to replicate some advanced technologies invented at Google.
Today, software like Hadoop, and other distributed data storage and management frameworks like Cassandra and Riak, are competing with software from IBM, SAP, and most prominently Oracle.
For AMD, a sophisticated multinational manufacturing company, to launch a major Oracle migration project is representative of a broader shift in IT which benefits low-cost or free software at the expense of incumbents like Oracle.
"We made the pivot to Hadoop [and] it not only increased our reliability but [improved] our response time," Dominguez told us. "It’s going to be an integral part of our enterprise data warehouse concept." ®
Courtesy David M. Denton, Commentary, InformationWeek.com
Technology isn’t enough to improve healthcare. Doctors must be able to distinguish between valuable data and information overload.
As a doctor, I know the value of information, but I also know what’s worse than not enough information: misinformation or too much information. In this information age, we seem to have plenty of both.
No matter what you think or believe, you can find proof of it on the Internet. You can also find a million and one ways to decorate your living room, making it overwhelmingly impossible to decide which ideas to use. The Internet is great at quenching our attention deficits by providing novelty at every click. Indeed, we can spend hours reading, watching, listening, or commenting without accomplishing anything at all. On the other hand, we get access to excellent resources and minds, beyond what was possible in a non-connected world.
Modern medicine also struggles with managing information. In our lust for data, we have created systems that store every keystroke, scan, or import, in a limitless cloud. Discrimination is no longer necessary. The pertinent and the frivolous are stored side by side. We no longer have data; we have “big data.” This allows the detection of trends and patterns that could never be identified with our smaller data sets. We are just beginning to understand its power.
Interestingly, however, while computers are great at sorting through data quickly and efficiently, humans aren’t. In fact, “more” often clogs our ability to discern and decide. Additionally, computers can’t distinguish good data from bad data. At present, humans are still required to use the data to make decisions and care for patients. Until we have computers that can form therapeutic alliances, be compassionate, diagnose conditions, and provide and coordinate reasonable treatments, we are still dependent on fallible biologic beings to provide our medical care.
One of the hopes of electronic health records (EHRs) is that they will revolutionize medicine by collecting information that can be used to improve how we provide care. But EHRs yield good data only if good data is entered, and that doesn’t always happen. To see patients; document encounters; enter smoking status; create coded problem lists; update medication lists; e-prescribe medications; order tests; find, open, and review multiple prior notes; schedule follow-up appointments; search for SNOMED codes, search for ICD-9 codes, and find CPT codes to bill encounters (tasks previously delegated to a number of people); and compassionately interact with patients, providers have to take shortcuts.