AMD has migrated terabytes of information from an Oracle Database installation to an Apache Hadoop stack, claiming Oracle’s pricey software was suffering from scaling issues.
The chip maker’s chief information officer, Jake Dominguez, revealed further details of the transfer in a chat with The Reg.
"Within the common Oracle platform we had we were struggling from a performance and reliability perspective," Dominguez told your correspondent in Atlanta just before the weekend. "One of the areas we were struggling with was in our test and assembly manufacturing – large, large datasets."
The migration of 276TB of data, which was completed last year, was prompted by "an environment outage that took weeks to recover," according to an internal document seen by El Reg. This encouraged AMD to replace Oracle with something else.
In the end, the processor giant settled on using Cloudera’s Hadoop distribution alongside additional open-source projects: Apache Hive, ZooKeeper, HBase, HDFS, HttpFS, LZO compression, MapReduce and others.
According to AMD, the Hadoop software has an unlimited row limit for query results compared to 100,000 rows on the chip giant’s Oracle setup, and "99 per cent of all queries execute in 15 minutes or less, with a median execution time of just 23 seconds."
What makes this shift so significant is that Oracle wants you to think AMD is the sort of company that will always use Oracle kit.
Oracle is grappling with a shift in the data warehouse and analytics market: its core business is being squeezed by free and open-source on-premises software, and its cloud wing is facing off with Amazon Web Services and the like.
Many organizations have sought to extricate themselves from Oracle’s grip, either by swapping out Oracle-owned open-source tech for other software, as Google did with a vast MySQL to MariaDB migration, or by shifting away from the company’s proprietary databases to open-source ones, as the UK’s National Health Service did with a major Riak migration.
One of the main open-source technologies commonly being deployed to supplement or replace Oracle is Hadoop, a data storage and processing framework that was first developed at Yahoo! in 2005 by engineers attempting to replicate some advanced technologies invented at Google.
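The MapReduce model at Hadoop's core is easiest to see with the canonical word-count example. Below is a minimal, single-process Python sketch of the map, shuffle, and reduce phases; it illustrates the programming model only, not Hadoop's actual Java API or its distributed execution.

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit (key, value) pairs -- here, (word, 1) for every word seen.
    for line in records:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as Hadoop does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: collapse each key's list of values into a final result.
    return {key: sum(values) for key, values in groups.items()}

records = ["the quick brown fox", "the lazy dog"]
counts = reduce_phase(shuffle(map_phase(records)))
# counts["the"] == 2
```

In a real Hadoop job, the map and reduce functions run on many machines in parallel and the shuffle moves data across the network, which is what lets the model scale to datasets like AMD's.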
Today, Hadoop and other distributed data storage and management frameworks, such as Cassandra and Riak, are competing with software from IBM, SAP, and most prominently Oracle.
For AMD, a sophisticated multinational manufacturing company, to launch a major Oracle migration project is representative of a broader shift in IT which benefits low-cost or free software at the expense of incumbents like Oracle.
"We made the pivot to Hadoop [and] it not only increased our reliability but [improved] our response time," Dominguez told us. "It’s going to be an integral part of our enterprise data warehouse concept." ®
An example of the nanostructures created with 3D laser lithography (Source: Karlsruhe Institute of Technology)
Computerworld – A team of German scientists used 3D laser lithography to print microscopic trusses and shells that are as strong as steel — and lighter than water.
The objective of the research is to someday create materials stronger than anything yet produced, yet lightweight enough for use in products such as aircraft or armor.
The honeycomb-like structures, made of ceramic-polymer composite material, are only about 50 nanometers thick. A nanometer is one billionth of a meter.
To give you an idea of how small the microarchitectures are, consider that a strand of human DNA is 2.5 nanometers in diameter and a human hair is about 80,000 to 100,000 nanometers wide.
The German team recently published a paper on their research.
The "polymer composites … exceed the strength-to-weight ratio of all engineering materials, with a density below 1,000 kilograms per cubic meter (kg/m³)," said Jens Bauer, a materials scientist leading the research at the Karlsruhe Institute of Technology in Germany, in the paper.
The Nanoscribe Photonic 3D Professional printer from Nanoscribe GmbH is a table-top laser lithography system used for creating three-dimensional photonic structures (Source: Nanoscribe)
The scientists took their inspiration from nature, which has produced materials far stronger, yet less dense than those created in a lab. For example, natural cellular materials such as bone and wood are strong and yet have considerably lower densities than aluminum alloys, Bauer stated.
The team used a 3D printer from Nanoscribe GmbH to create the nanostructures they hope can someday enable the creation of super-strong materials.
"Applying 3D laser lithography, which allows for producing almost arbitrary structures with sub-micron resolving power, micro-truss and -shell structures may be manufactured," he stated in a recently published paper. "Ratios comparable to those of advanced metallic alloys or technical ceramics have been obtained."
The nanostructures are created by placing a small amount of photocurable resin on a glass slide. Then a stereolithography 3D printer projects a laser in a grid-like pattern on the liquid material, hardening it where the light strikes. The resulting hardened structure is then coated with alumina, or aluminium oxide.
Nanoscribe’s stereolithography 3D printers cannot yet create structures larger than a few micrometers in size.
European car safety researchers have developed a camera-based system that watches your facial expressions while you drive, and then uses highly accurate emotion detection algorithms to work out when you’re suffering from road rage. The idea behind this system is that, when you’re irritated or angry, you become a more aggressive driver and less attentive — leading to more accidents. The same technology can also be used to measure tiredness and fatigue, by measuring the percentage of your eyelid closure, and then warning you to take a break before you fall asleep at the wheel.
This work, developed by EPFL’s Signal Processing 5 Laboratory (LTS5) in association with PSA Peugeot Citroen, uses an infrared camera placed behind the car’s steering wheel to track the seven universal emotions that your face can show. Fear, anger, joy, sadness, disgust, surprise, and contempt are so intrinsic to human nature that they have very specific muscle movements — movements that can be fairly easily picked up with an infrared camera and some computer vision software. As you can see in the video below, the software tracks your eyes, mouth, and nose, and from their movements it can work out what emotion you’re currently experiencing.
If your face registers “anger” or “disgust” for long enough, the software decides that you are stressed out and probably about to do something stupid. Because this is a prototype, all the EPFL system does is tell you when you’re suffering from road rage — but presumably a production version of the technology would do a lot more. Maybe it would pre-charge your brakes, ready for when you tailgate the guy in front of you? Or maybe, if your car has some autonomous driving features, they could quietly take over — so you think you’re still driving, but it’s actually your car that’s preventing you from swerving out of your lane or piling into the car in front of you. Maybe such a system could disable your car’s horn, too…
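The "for long enough" part matters: a single angry-looking frame shouldn't trigger a warning. One common way to implement that is a sliding window over per-frame emotion labels. This is a simplified Python sketch of that idea; the frame rate, window length, threshold, and emotion labels are illustrative assumptions, not details of the EPFL system.

```python
from collections import deque

RAGE_EMOTIONS = {"anger", "disgust"}  # labels treated as road-rage signals (assumption)
WINDOW_FRAMES = 90    # e.g. 3 seconds at 30 fps (illustrative)
RAGE_FRACTION = 0.8   # fraction of the window that must match to trigger

class RoadRageDetector:
    def __init__(self):
        # Ring buffer of booleans: was each recent frame a "rage" frame?
        self.window = deque(maxlen=WINDOW_FRAMES)

    def update(self, emotion_label):
        """Feed one per-frame emotion label; return True when a warning should fire."""
        self.window.append(emotion_label in RAGE_EMOTIONS)
        if len(self.window) < WINDOW_FRAMES:
            return False  # not enough history yet to judge
        return sum(self.window) / WINDOW_FRAMES >= RAGE_FRACTION

detector = RoadRageDetector()
fired = [detector.update("anger") for _ in range(90)]
# fired[0] is False (window not full); fired[-1] is True (sustained anger)
```

The same windowing trick works for the drowsiness case mentioned above: replace the emotion labels with per-frame eyelid-closure measurements and warn when the closed fraction stays high.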
Moving forward, LTS5 hopes it can use its computer vision to detect other states, such as distraction, and to read your lips, which could considerably help with in-car voice recognition. It’s also worth pointing out that similar systems are already in production vehicles — ExtremeTech’s car of the year, the Mercedes-Benz S-Class, uses the steering wheel to detect when you’re drowsy. These systems don’t have quite the same range of emotion detection as camera-based solutions — but really, the ultimate system would combine both steering wheel and computer vision technologies, and also use sensors in your seat, noise sensors in the cockpit (noisy kids), and other clever techniques of assessing your roadworthiness.
One day, when autonomous vehicles are the norm, you’ll be able to get into your car — and then it will automatically detect that you’re completely wasted, and then drive you home using a route that minimizes the risk of throwing up. One day.
During a keynote address at Mobile World Congress 2014, IBM CEO Ginni Rometty announced the IBM Watson Mobile Developer Challenge, a global competition to promote the development of mobile consumer and business apps powered by Watson.
During the next three months, IBM is calling on software developers who are willing to develop and bring to market a commercial application that leverages Watson capabilities, such as the ability to answer complex questions posed in natural language with speed, accuracy, and confidence. Three winners will receive IBM support to further develop their apps and bring them to market.
Rometty explained that Watson, first developed by IBM researchers to show what was possible in combining cognitive computing and natural language processing, has become far more than the novelty and headline-grabbing artificially intelligent computer system that competed against Jeopardy champions on TV a few years ago.
Since then, the company has created a Watson division, and IBM has been pouring more money into the developments to commercialize the technology. But in addition to continuing its own research and commercializing elements of Watson, IBM is also reaching out to a broader ecosystem of customers, partners, and developers to come up with their own creative applications for Watson.
IBM’s Watson during its 2011 appearance on Jeopardy.
(Credit: Screenshot by Marguerite Reardon/CNET)
The technology is already being used in several industries, including banking, health care, and retail. For instance, at Memorial Sloan-Kettering Cancer Center in New York City, oncologists are using the technology to help diagnose and treat cancer patients. Using the Watson “cloud” service, the doctors feed Watson data on clinical trials; information regarding treatments; and personal statistics on patients, which the cognitive computing engine uses to provide feedback on treatments. IBM showed a video in which a doctor at Sloan-Kettering asked Watson for a revised course of action for treatment of a patient, speaking in natural language to make the request. And then Watson answered with options for an individualized treatment plan.
Watson isn’t replacing the need for a doctor, the oncologist in the video pointed out. Instead it presents more options to help the real doctors make more-informed decisions.
In an onstage interview with tech journalist David Kirkpatrick, Rometty talked about how Watson is being used in retail. She described how the outdoor clothing company The North Face is using it to help customers buy equipment and apparel.
She demonstrated the service by telling Watson about a trip she planned to Patagonia. Watson answered with recommendations for the type of clothing she needed and the backpack she should use. It also told her to get an ABS. She said she wasn’t sure what that was and looked on a typical search engine for an explanation. She said the Web search request brought back dozens of explanations about antilock braking systems on cars.
Clearly this was not the ABS that the North Face Watson application was recommending. She asked Watson what ABS was. And she was told in plain spoken language that it is a special emergency airbag system used by hikers and skiers during an avalanche.
“Watson knew I wasn’t asking about antilock brakes,” she said. She explained that the service was intelligent enough to put her request in the context of her discussion regarding what to bring on a trip to Patagonia.
“It had to know where Patagonia was, what the climate is like, and that I might encounter an avalanche,” she said.
With the new developer challenge, Rometty said, IBM wants to bring Watson to the mobile industry to see what types of applications mobile developers will come up with to leverage the intelligence service.
While other technology companies, such as Apple, have tried to offer a similar voice-activated intelligent system for mobile phones, those systems haven’t even come close to the cognitive ability Watson has achieved. Initially, Watson’s technology was too big to cram into a mobile device. When it first appeared in 2011 on the Jeopardy TV show, the system of servers took up an entire room. But IBM has worked aggressively to shrink the technology, and now it can be delivered as a cloud-based service, Rometty said.
Of course, IBM and Apple aren’t the only companies working on artificial intelligence technology that uses natural language as an input. Google recently bought London-based artificial intelligence company DeepMind for $500 million. And other tech giants, such as Facebook and Yahoo, are making forays into the world of artificial intelligence.
Still, Rometty thinks IBM has a leg up compared with the rest of the industry.
“Every major invention in data and analytics has come from IBM,” she said.
E-mail is old and complex. It’s the oldest still-recognizable component of the Internet, with its modern incarnation having coalesced out of several different decades-old messaging technologies including ARPANET node-to-node messaging in the early 1970s. And though it remains a cornerstone of the Internet—the original killer app, really—it’s also extraordinarily hard to do right.
We most often interact with e-mail servers through friendly Web-based front-ends or applications, but a tremendous amount of work goes into hiding the complexity that allows the whole system to work. E-mail functions in a poisoned and hostile environment, flooded by viruses and spam. The seemingly simple exchange of text-based messages operates under complex rules with complex tools, all necessary to keep the poison out and the system functioning and useful in spite of the abuse it’s constantly under.
From a normal person’s perspective, e-mail seems like a solved problem: sign up for Internet access and your ISP gives you an e-mail address. Google, Apple, Yahoo, or any number of other free e-mail providers will hook you up with e-mail accounts with gigabytes of space and plenty of cool value-added features. Why do battle with arcane dragons to roll your own e-mail solution?
I’ll tell you why: because if it’s in the cloud, it’s not yours.
Because you must rely on others for your security. You have no control over who can read your correspondence—you must allow your data to be mined and your marketing profile extracted. You won’t be told if your metadata is collected or if your inbox is vacuumed up by a secret government request. You consent to be not a customer but a product, and a product has no rights.
Well, to hell with that. It’s your e-mail. And we’re going to take it back.
This is hard and even a bit scary…
E-mail is hard. If you want an easier sysadmin project, go set up a Web server. E-mail is a lot more complex, with many more moving parts. On the other hand, your correspondence with others is one of the most personal aspects of your online life—in a medium ultimately made of text, your words are you. It’s worth learning how to claw your online life back from those who would data mine and monetize it.
There are pitfalls and caveats—the biggest of which is that if you run your own e-mail server, you will be the sysadmin. The upside of this is that no bored or tired customer service rep about to go off-shift is going to fall for a social engineering attack and reset your e-mail password. The downside is that you are responsible for the care and feeding of your system. This is not an impossible task—it’s not even really difficult—but it is non-trivial and never-ending. Applying critical updates is your responsibility. When do critical updates come out? That’s your responsibility to keep track of, too.
Worst of all, if you screw up and your server is compromised or used as a spam relay, your domain will almost certainly wind up on blacklists. Your ability to send and receive e-mail will be diminished or perhaps even eliminated altogether. And totally scrubbing yourself from the multitude of e-mail blacklists is about as difficult as trying to get off of the TSA’s No Fly list.
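When mail starts bouncing, the first diagnostic is usually a DNS blacklist (DNSBL) lookup: you reverse your server IP's octets, append the blacklist's zone, and resolve the resulting name. An answer means you're listed; NXDOMAIN means you're clean. Here's a small Python sketch of that check, with the live lookup kept out of the example so it stays offline (zen.spamhaus.org is one widely used zone).

```python
import socket

def dnsbl_query_name(ip, zone="zen.spamhaus.org"):
    """Build the DNSBL lookup name: IPv4 octets reversed, zone appended."""
    reversed_octets = ".".join(reversed(ip.split(".")))
    return f"{reversed_octets}.{zone}"

name = dnsbl_query_name("203.0.113.5")
# name == "5.113.0.203.zen.spamhaus.org"

def is_listed(ip, zone="zen.spamhaus.org"):
    # An A-record answer means the IP is listed; NXDOMAIN means it is not.
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False

# is_listed("127.0.0.2")  # Spamhaus's documented always-listed test address
```

Checking your IP against a few major DNSBLs after any configuration change is cheap insurance compared to the delisting process described above.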
You have been warned.
…but it’s also worth doing
OK, that ought to be enough to scare away the people who aren’t serious. For those of you still with me: this is going to be a hell of a lot of fun, and you’re going to learn a lot.
This is going to be a multi-part series, and here in this first part we’re going to ask (and answer) a bunch of questions about how we’re going to set our e-mail server up. We’ll also outline the applications we’re going to use and talk about what they do. We expect this series will run over the course of the next few weeks; unlike our series on setting up a Web server, though, you won’t be able to get started firing off e-mails after part 1—you need the whole thing in order for it all to work right.
This certainly isn’t the only DIY e-mail tutorial on the Web. If you’re eager to skip ahead and get started now, we suggest consulting Christoph Haas’ excellent tutorial on Workaround.org—he makes many (but nowhere near all) of the same configuration choices that we will be making. However, Ars wouldn’t be putting this guide together if we didn’t have a few tricks up our sleeves—we’ve been in an e-mail configuration cave for the past month, and we have a lot of good information to share.
We all know that wearable computers will need to be small, and Intel has obliged by unveiling Edison, a new computer housed inside an SD card that has built-in Wi-Fi connectivity and that can support multiple operating systems. Intel CEO Brian Krzanich said that the goal of Edison is to give wearable computing manufacturers a go-to option when they need a computer to power their devices.
“Wearables are not everywhere today because they aren’t yet solving real problems and they aren’t yet integrated with our lifestyles,” he explained. “We’re focused on addressing this engineering innovation challenge. Our goal is, if something computes and connects, it does it best with Intel inside.”
Scientists, engineers, and policymakers are all figuring out how drones can be used better: more smartly, more precisely, with less harm to civilians, and with longer range and better staying power. One method under development is increasing autonomy on the drone itself.
Eventually, drones may have the technical ability to make even lethal decisions autonomously: to respond to a programmed set of inputs, select a target and fire their weapons without a human reviewing or checking the result. Yet the idea of the U.S. military deploying a lethal autonomous robot, or LAR, is sparking controversy. Though autonomy might address some of the current downsides of how drones are used, it introduces new downsides that policymakers are only just learning to grapple with.
The basic conceit behind a LAR is that it can outperform and outthink a human operator. "If a drone’s system is sophisticated enough, it could be less emotional, more selective and able to provide force in a way that achieves a tactical objective with the least harm," said Purdue University Professor Samuel Liles. "A lethal autonomous robot can aim better, target better, select better, and in general be a better asset with the linked ISR [intelligence, surveillance, and reconnaissance] packages it can run."