.net programming, computers and assorted technology rants

Posts tagged “Google”

James Bond’s Got Nothing On Google

Courtesy Olivia Solon, wired.co.uk


Earlier this year, Wired.co.uk wrote about Google’s invention of a smart contact lens that could monitor blood glucose levels through tear fluid. Now, the tech giant has invented another pair of lenses with an in-built camera.

The lenses were developed in the Google X lab and were featured in a patent filing dating from 2012, which was recently published by the US Patent and Trademark Office. The patent filing features a contact lens that includes an embedded circuit, camera and sensor. The control circuit could be linked wirelessly or via a wire to the camera and sensor. The sensor could be a light sensor, pressure sensor, temperature sensor or electrical field sensor, which may allow people to gain a "sixth sense" of sorts.

While the project might seem a bit "out there", the technology isn’t all that far off — smart contact lenses with displays have already been tested in labs, although they’ve been a little clunky up until now. One of the key benefits of having a camera embedded in a contact lens rather than attached to the side of the head like Google Glass is that the camera frame would follow a person’s precise gaze without obstructing their view (by being placed along the edge of the lens, away from the pupil).

In the patent filing — as described in great detail over at Patent Bolt — Google points out that the lens could take a raw image from its camera, process it and relay what it sees to a blind wearer via a different sense — perhaps an audio warning that there is a car approaching a junction, for example. There may also be the option of go-go-gadget eyes with a zoom capability.

If these contact lenses ever do come to market, it means you can leapfrog the Glasshole stage and go straight to Lenshole. Or whatever the neologism for that will be. In the meantime you can, for one day only, join the Glass Explorer programme today (15 April).


Google Releases .NET Framework APIs

Courtesy David Ramel, VisualStudioMagazine.com

The company announced general availability of the Google APIs Client Library for .NET version 1.8.1 in a blog post by Dan Ciruli of the Google Cloud Platform Team. "This library is an open source effort, hosted at NuGet, that lets developers building on the Microsoft .NET Framework integrate their desktop or Windows Phone applications with Google’s services," he said.

He noted that the company tries to make its APIs accessible to developers working with any platform, from almost every language on nearly any hardware, with support for REST, HTTP and JSON. "However, to be truly useful on many platforms, it helps to have a client library–one that packs a lot of functionality like handling auth, streaming media uploads and downloads, and gives you native language idioms," he said.

That usefulness comes in the library’s integration with OAuth 2.0, the capability to stream media uploads and downloads, support of batching requests, and more. "Whether you are plugging Google Calendar into your .NET Framework-based application, translating text in a Windows Phone app or writing a PowerShell script to start Google Compute Engine instances, the Google APIs Client Library for .NET can save you tons of time," Ciruli said.
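As a rough idea of what that looks like in practice, here is a minimal sketch (mine, not taken from Ciruli's post) of wiring Google Calendar into a .NET Framework console app with the client library. The client ID and secret are placeholders you would obtain from your own Google Cloud Console project, and the namespaces assume the Google.Apis.Auth and Google.Apis.Calendar.v3 NuGet packages.

```csharp
// Minimal sketch: authorize with OAuth 2.0 and list upcoming Calendar events.
// ClientId/ClientSecret are placeholders from your own Google Cloud Console project.
using System;
using System.Threading;
using Google.Apis.Auth.OAuth2;
using Google.Apis.Calendar.v3;
using Google.Apis.Services;

class CalendarSketch
{
    static void Main()
    {
        // The library handles the OAuth 2.0 dance and caches the token locally.
        UserCredential credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
            new ClientSecrets { ClientId = "YOUR_CLIENT_ID", ClientSecret = "YOUR_CLIENT_SECRET" },
            new[] { CalendarService.Scope.CalendarReadonly },
            "user", CancellationToken.None).Result;

        var service = new CalendarService(new BaseClientService.Initializer
        {
            HttpClientInitializer = credential,
            ApplicationName = "Calendar Sketch"
        });

        // Pull the events from the user's primary calendar and print their titles.
        var events = service.Events.List("primary").Execute();
        foreach (var ev in events.Items)
            Console.WriteLine(ev.Summary);
    }
}
```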

The library, hosted on NuGet, sports dozens of APIs, letting developers work with AdSense, Blogger, Cloud SQL database management, YouTube features and more. The new .NET library joins other client libraries for Java, JavaScript, Objective-C, PHP (in beta) and Python. Early-stage work is being done on libraries for Go, Node.js, Ruby and the Google Web Toolkit.

A Getting Started page and APIs Explorer are available for developers who want to dive into the new library.

Google said no major changes have been made from the release candidate version, but the documentation has been expanded.

Just one day after the .NET library announcement, Google on Tuesday announced Android Wear, a project to extend Android to the new breed of wearable computing devices. The company is first focusing on computerized watches, which will provide all kinds of information at a glance, monitor your health and fitness, answer spoken questions much as Apple’s Siri does and control other devices, among other capabilities. You can now sign up for access to a developer preview, intended only for development and testing, while an Android Wear SDK is promised "in the coming months." The preview focuses on notification APIs that help developers enhance their app notifications to create useful user experiences. To aid in development and testing, Google is providing a Design Principles for Android Wear page.

Also this week, Google announced a paper to provide information on working with various existing configuration management tools on its Google Compute Engine, a "virtual datacenter" provided via a host of virtual machines (VMs).

"Over the last decade, a vibrant ecosystem of open source tools has emerged to manage the complexity of large-scale compute deployments," said solutions architect Matt Bookman in a blog post. "These tools allow you to deploy changes more rapidly, recover faster from failures, and take unused resources out of service, enabling you to keep your services’ uptime high and operational costs low."

He noted that an existing Compute Engine API and gcutil command-line tool are available for resource management, but technical leads and others might find it useful to also work with tools designed for software management.

"Puppet, Chef, Salt and Ansible are configuration management tools that provide software and resource management," Bookman said. "They are open source and support Google Compute Engine. If your organization already uses one of these tools for managing other systems, we hope to help you get started using it with Google Compute Engine."

This getting-started guidance is available in the recent paper, "Compute Engine Management with Puppet, Chef, Salt, and Ansible," which discusses working with each of the four configuration management tools.


Google making the Web faster with protocol that reduces round trips

Courtesy John Brodkin, ArsTechnica

Can Google’s QUIC be faster than Mega Man’s nemesis, Quick Man? (Image credit: Josh Miller)

Google, as is its wont, is always trying to make the World Wide Web go faster. To that end, Google in 2009 unveiled SPDY, a networking protocol that reduces latency and is now being built into HTTP 2.0. SPDY is now supported by Chrome, Firefox, Opera, and the upcoming Internet Explorer 11.

But SPDY isn’t enough. Yesterday, Google released a boatload of information about its next protocol, one that could reshape how the Web routes traffic. QUIC—standing for Quick UDP Internet Connections—was created to reduce the number of round trips data makes as it traverses the Internet in order to load stuff into your browser.

Although it is still in its early stages, Google is going to start testing the protocol on a "small percentage" of Chrome users who use the development or canary versions of the browser—the experimental versions that often contain features not stable enough for everyone. QUIC has been built into these test versions of Chrome and into Google’s servers. The client and server implementations are open source, just as Chromium is.

"Users shouldn’t notice any difference—except hopefully a faster load time," Google’s Jim Roskindwrote in a blog post.

Roskind apparently goes by the title of "RTT Reduction Ranger" at Google, referring to "round trip time." Roskind wrote that round trip time, "which is ultimately bounded by the speed of light—is not decreasing, and will remain high on mobile networks for the foreseeable future." QUIC, he writes, "runs a stream multiplexing protocol over a new flavor of Transport Layer Security (TLS) on top of UDP instead of TCP. QUIC combines a carefully selected collection of techniques to reduce the number of round trips we need as we surf the Internet."

An FAQ and an in-depth design document provide more information than most people would want to know about QUIC. Besides running multiplexed connections over UDP, QUIC was "designed to provide security protection equivalent to TLS/SSL, along with reduced connection and transport latency," the FAQ states.

"QUIC will employ bandwidth estimation in each direction into congestion avoidance, and then pace packet transmissions evenly to reduce packet loss," Google says. "It will also use packet-level error correction codes to reduce the need to retransmit lost packet data. QUIC aligns cryptographic block boundaries with packet boundaries so that packet loss impact is further contained."

Google had to design QUIC carefully to avoid it becoming a nice theoretical system with no applicability to the real world. That’s why Google is using UDP instead of building a protocol made entirely of new technologies. "Middle boxes on the Internet today will generally block traffic unless it is TCP or UDP traffic," Google said. "Since we couldn’t significantly modify TCP, we had to use UDP. UDP is used today by many game systems, as well as VoIP and streaming video, so its use seems plausible."

Ultimately, Google’s goal is not necessarily to replace the Web’s current protocols but to bring improvements to how TCP is used with SPDY. SPDY already provides multiplexed connections over SSL, but it runs across TCP, causing some latency issues.

Whereas TCP uses a three-step process (or a "handshake") to negotiate connections between Web users and servers, UDP is handshake-less. UDP sends packets out the door without establishing a connection or guaranteeing delivery, improving speed while reducing reliability. QUIC attempts to provide the speed advantages of UDP while making data delivery more reliable.
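The difference is easy to see at the socket level. The following sketch (my own illustration, not Google's code) uses the .NET Framework's TcpClient and UdpClient: the TCP client has to complete the three-way handshake before any data moves, while the UDP client simply fires a datagram at the destination. The host name and ports are placeholders for any reachable service.

```csharp
// Illustration of the handshake difference between TCP and UDP in .NET.
// "example.com" and the ports are placeholders for any reachable host/service.
using System.Net.Sockets;
using System.Text;

class HandshakeDemo
{
    static void Main()
    {
        byte[] payload = Encoding.ASCII.GetBytes("hello");

        // TCP: Connect() blocks until the SYN / SYN-ACK / ACK exchange
        // completes (one full round trip) before we can send anything.
        using (var tcp = new TcpClient())
        {
            tcp.Connect("example.com", 80);
            tcp.GetStream().Write(payload, 0, payload.Length);
        }

        // UDP: no handshake at all -- the datagram goes out immediately,
        // with no guarantee it arrives, or arrives in order.
        using (var udp = new UdpClient())
        {
            udp.Send(payload, payload.Length, "example.com", 12345);
        }
    }
}
```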

From Google’s QUIC FAQ:

Why can’t you just evolve and improve TCP under SPDY? That is our goal. TCP support is built into the kernel of operating systems. Considering how slowly users around the world upgrade their OS, it is unlikely to see significant adoption of client-side TCP changes in less than 5-15 years. QUIC allows us to test and experiment with new ideas, and to get results sooner. We are hopeful that QUIC features will migrate into TCP and TLS if they prove effective.

A major problem with SPDY over TCP today is that "[a] single lost packet in an underlying TCP connection stalls all of the multiplexed SPDY streams over that connection," Google said. "With UDP, QUIC can support out-of-order delivery, so that a lost packet will typically impact (stall) at most one stream."

TCP and TLS/SSL also typically "require one or more round trip times (RTTs) during connection establishment," Google said. "We’re hopeful that QUIC can commonly reduce connection costs toward zero RTTs. (i.e., send hello, and then send data request without waiting)."

Google doesn’t know just how much faster QUIC will make Web surfing, because in-house tests often differ significantly from real-world network conditions. That’s why testing with actual Web users is crucial. The question of how much QUIC is able to reduce latency in the real World Wide Web is what "we are investigating at the moment, and why we are experimenting with various features and techniques in Chromium," Google said. "It is too early to share any preliminary results—stay tuned."


Pic of the Day: Big Brother is Tracking You

Courtesy VentureBeat.com

[Infographic: tech company privacy and tracking]

Read More:

http://venturebeat.com/2013/06/25/every-day-tracking/


Google’s Mobile Backend Starter

Courtesy Mellisa Tolentino, SiliconAngle

Google has released a new tool that lets developers reap the benefits of a cloud backend for their apps without the headaches of running their own servers, and it is aimed squarely at mobile developers.

The Google Mobile Backend Starter is a one-click-deployable, general-purpose cloud backend paired with a general-purpose client-side framework for Android.

“The new tool essentially provides developers everything they need for their apps with a robust enough system to handle virtually any load a mobile app can throw at it. To get started, developers simply have to select the Mobile Backend sample app when they start a new App Engine project,” Kristin Feledy stated on her NewsDesk segment.

It gives developers everything they need to set up a backend for their apps without having to write backend code: App Engine servers that store their data, plus a client library and sample app for Android that make it easy to access that data. Developers can also add support for Google Cloud Messaging (GCM) and continuous queries that notify an app of events it is interested in. And since everything must be secured, Google has thrown in built-in support for Google Authentication in Mobile Backend Starter.

Mobile Backend Starter Features

Cloud data storage that lets developers store any amount of data that can be accessed anywhere.

Pub/Sub messaging that lets developers send messages from one device to any or all other devices in use, useful for applications such as social apps, forums, chat, gaming, and group collaboration.

Push notifications that update data on all available devices.

Continuous queries that run continuously on the server and automatically feed updates to clients, powered by Prospective Search.

Google authentication and authorization to keep data isolated per user or shared among users.

Free to get started, and scales with your needs, allowing developers to handle hundreds of users for free and eventually grow to any scale.

How to set it up

Go to http://cloud.google.com/console, create a project, then click deploy.

Click on Settings to go to the admin panel for your new backend. Under “Authentication / Authorization” select “Open (for development use only)” and save the changes.

Download the Android client project and open it up in your Android IDE. Locate the Consts.java file and set the PROJECT_ID to the Project ID you created in the Google Cloud Console.

The next step is the fun part: simply build and run the Android application, which is now cloud-enabled and backed by your new backend. No more worrying.


Google Upgrading to 2048-bit digital certificates

Courtesy Dan Goodin, Ars Technica

Google is upgrading the digital certificates used to secure its Gmail, Calendar, and Web search services. Beginning on August 1, the company will start upgrading the RSA keys used to encrypt Web traffic and authenticate its services to 2048 bits, twice the length used now.

The rollout affects the transport layer security (TLS) certificates that underpin HTTPS connections to Google properties. Sometimes involving the secure sockets layer (SSL) protocol, the technologies prevent attackers from reading the contents of traffic passing between end users and Google. They also provide a cryptographic assurance that servers claiming to be Google.com are in fact operated by Google, as opposed to being clones created by attackers exploiting age-old weaknesses in the way the Internet routes traffic.

There are good reasons for Google to upgrade the strength of these crucial digital keys. The weaker the key strength of an RSA key pair, the easier it is for anyone to mathematically derive the "private key." Such attacks work by taking the certificate’s "public key" that’s published on the website and factoring it to derive the two prime numbers that make up the private key. Once the private key for a Google certificate has been factored, the attacker can impersonate an HTTPS-protected Google server and provide the same indications of cryptographic security as the legitimate service. Someone who was able to derive the secret primes to Google’s private key, for instance, would be able to create convincing attacks that would fool many browsers and e-mail clients.
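For the curious, the key length a site presents is easy to inspect from .NET. The sketch below (my own, not from the article) makes an HTTPS request and prints the size of the RSA public key in the certificate the server returned; once Google's rollout is complete you would expect to see 2048 rather than 1024.

```csharp
// Inspect the public-key size of the certificate an HTTPS server presents.
using System;
using System.Net;
using System.Security.Cryptography.X509Certificates;

class CertKeySize
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("https://www.google.com/");
        using (request.GetResponse()) { }  // the TLS handshake populates ServicePoint.Certificate

        var cert = new X509Certificate2(request.ServicePoint.Certificate);
        Console.WriteLine("Subject:  " + cert.Subject);
        Console.WriteLine("Key size: " + cert.PublicKey.Key.KeySize + " bits");
    }
}
```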

The factors in private keys are extremely time-consuming to find, but increases in computing power are making the task gradually easier. In 2009, researchers were able to factor a 768-bit RSA key, according to Wikipedia. The online encyclopedia went on to say that a 1024-bit key has not yet been factored. While it may take years for that to happen, it’s only a matter of time until it is. And of course, secretive agencies within powerful nation states may already have the ability to factor larger bit sizes.

Read More:

http://arstechnica.com/security/2013/05/google-builds-bigger-crypto-keys-to-make-site-forgeries-harder/


Google Releases Realtime API For Drive Apps

Courtesy Information Week

Google has released a new application programming interface (API) that allows developers to implement real-time collaboration in Google Drive apps.

Users of Google Docs, as well as Spreadsheets and Slides, now have the ability to edit a document at the same time others are doing so, and each can see the changes input by collaborators in real time. This is made possible by a technology called operational transformation, also featured in the now-discontinued Google Wave, which ensures the rapid transference of changes over a network.
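To make the idea concrete, here is a toy sketch (my own illustration, not Google's implementation and unrelated to the Realtime API's actual JavaScript interface) of operational transformation for two concurrent text inserts: each replica applies its local edit first, then applies the remote edit after shifting its position, and both replicas converge to the same document.

```csharp
// Toy operational-transformation demo for concurrent text inserts.
using System;

class Insert
{
    public int Position;
    public string Text;
}

class OtDemo
{
    // Shift 'op' so it can be applied after 'applied' has already been applied.
    static Insert Transform(Insert op, Insert applied)
    {
        int pos = op.Position;
        if (applied.Position <= op.Position)
            pos += applied.Text.Length;
        return new Insert { Position = pos, Text = op.Text };
    }

    static string Apply(string doc, Insert op)
    {
        return doc.Insert(op.Position, op.Text);
    }

    static void Main()
    {
        string doc = "Hello world";
        var a = new Insert { Position = 5, Text = "," };   // user A's edit
        var b = new Insert { Position = 11, Text = "!" };  // user B's edit

        // Replica 1 applies A first, then B transformed against A.
        string replica1 = Apply(Apply(doc, a), Transform(b, a));
        // Replica 2 applies B first, then A transformed against B.
        string replica2 = Apply(Apply(doc, b), Transform(a, b));

        Console.WriteLine(replica1);  // "Hello, world!"
        Console.WriteLine(replica2);  // same text: both replicas converge
    }
}
```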

Now developers who create apps that rely on Google Drive for storage can provide their users with the ability to interact and work together in real time.

“With the new Google Drive Realtime API, you can now easily add some of the same real-time collaboration that powers Google Drive to your own apps,” explained Brian Cairns, a software engineer at Google, in a blog post. “This new API handles network communication, storage, presence, conflict resolution and other collaborative details so you can focus on building great apps.”


The makers of three apps have already integrated the Google Drive Realtime API into their code.

One is Neutron Drive, an online code editor. Using Google’s Realtime API, Neutron Drive allows multiple programmers to make changes to the same file at the same time. Version control systems like Git allow a similar sort of collaboration, but not in real time — changes to code stored in a Git repository must be merged, which may create conflicting versions of a file if the same lines of the program were revised by different collaborators. These conflicts can be reconciled, but real-time collaboration provides a way to avoid conflicts on the fly.

Paul Bailey, the developer who created Neutron Drive, said in an email that he found the API to be extremely useful because it makes adding real-time features so easy. “I think you’ll see a new wave of apps that will use this technology,” he said. “Before this API, I struggled with how to implement real-time features into Neutron Drive and now Google has made this easy and scalable — two of the best things a developer likes to hear.”

Bailey acknowledged that not everyone needs real-time collaboration capabilities. “A lot of developers are lone rangers who code by themselves,” he said. “So for them, it probably won’t make much of a difference. However, others like to pair program or may need help from a friend.”

He also said he expects real-time collaboration will be useful in apps for students and teachers.

The two other apps that have been updated to utilize the Drive Realtime API are Gantter, a free online project-scheduling tool, and draw.io, a diagramming application.

In addition, Google has created a collaborative colored cube puzzle — a Rubik’s Cube for those not concerned about trademark lawsuits — to demonstrate how frustrating it can be to have multiple people all trying to solve the same puzzle.

Those committed to investigating the technology further can stop by the Drive Realtime API Playground and the Google Drive Realtime API technical documentation.