.net programming, computers and assorted technology rants

Archive for June, 2013

Developers Mostly Positive about Build

Courtesy John K. Waters, VisualStudioMagazine

Microsoft’s annual Build conference drew about 6,000 attendees to San Francisco this week, and an estimated 60,000 caught the keynotes online. The Redmond software maker officially released the preview edition of Windows 8.1 at the show (complete with resurrected Start button), unveiled new Azure cloud services focused on mobile and web development, and pitched the Bing search engine as a development platform.

Microsoft CEO Steve Ballmer trotted out a truly dizzying array of Windows devices (an "explosion of new devices," he said), including small tablets, that he said were flying off the shelves. ("The small form factor is very important," he said.) He also showed off new Facebook, Flipboard, and NFL apps for 8.1, and beat the drum of "touch, touch, touch." He also pointed out proudly that it’s been only eight months since the last Build conference — evidence, he said, of the new rapid release cadence of Microsoft products.

The company also promoted a public preview of Visual Studio 2013 and .NET Framework 4.5.1, which impressed conference attendee Ryan Balsick, manager of a small development group at Nashville-based ICA. Balsick’s company is a .NET shop that builds a suite of products that facilitate health information exchange.

"Microsoft always has great developer tools," Balsick said, "but the company’s biggest challenge is to get people to use Windows 8. If you’re an app developer, like we are, that’s critical. It’s only if a lot of people are using that platform that we get to start developing applications that take advantage of cool things like touch."

Microsoft’s announcement that it was opening up the Bing search engine as an app development platform could prove to be a game changer, Balsick said. Microsoft launched a developer portal this week stocked with a collection of APIs.

Gurdeep Singh Pall, VP of Microsoft’s Bing group, made the Bing pitch during his portion of the keynote. "Bing is a great search tool," he said, "but it’s actually very valuable outside of the search box as well. For a long time now, we’ve thought that you could use these capabilities to create some great experiences."

"Making it possible for developers to tap that and create seamless applications that tie into search: that’s huge, too," Balsick said.

Steve Testa, an application developer at Cleveland, OH-based Hyland Software, was also impressed by the Bing-as-a-dev-platform strategy. "I think it’ll be amazing to see what comes out of that in the next year," he said.

Testa’s co-worker at Hyland, Chuck Camps, agreed. He also felt that the unified Windows platform vision seemed to be coming together. And he credited Microsoft for its attention to its developer community.

"Microsoft doesn’t always get it right," he said. "But one thing they do get right is the developer experience. They nail it, year after year."

Zack Williamson, an independent contractor from Tampa, FL, who works mostly with servers and clients, was impressed by the new multiple-monitor support coming in Windows 8.1 and "enhancements to the Start screen experience." He added that he’s "all-in" when it comes to the new OS.

"I’ve been pushing for it in my environments," he said, "trying to get people to migrate in that direction. It’s the future. It’s where we’re going, and it’s not actually a particularly painful transition."

But it was the Windows Azure integration with Visual Studio 2013 that interested Williamson most as a developer. VS now connects with Azure Mobile Services, allowing developers to synchronize over multiple devices.
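
As a rough illustration of what that integration means in code, here is a minimal sketch of calling an Azure Mobile Services backend from C#. It is not taken from anything shown at Build: the service URL, application key and TodoItem type are placeholders, and it assumes the WindowsAzure.MobileServices client NuGet package.

using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;

// Placeholder data type; Mobile Services infers the table name from the class name.
public class TodoItem
{
    public int Id { get; set; }
    public string Text { get; set; }
    public bool Complete { get; set; }
}

public static class MobileServicesSketch
{
    // Placeholder URL and application key; substitute the values
    // generated for your own mobile service in the Azure portal.
    private static readonly MobileServiceClient Client =
        new MobileServiceClient(
            "https://your-service.azure-mobile.net/",
            "YOUR-APPLICATION-KEY");

    public static async Task AddItemAsync(string text)
    {
        IMobileServiceTable<TodoItem> table = Client.GetTable<TodoItem>();
        await table.InsertAsync(new TodoItem { Text = text, Complete = false });
    }
}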

"To be able to update and edit your procedures in Azure right from the IDE instead of having to go off into Azure management is going to be a big, big deal for a lot of developers," Williamson said.

One common complaint among attendees was a lack of detailed announcements around Windows Phone. Ballmer spoke briefly about the platform and showed off several Windows phones. It was also announced that Sprint will be adding the HTC 8XT and the Samsung ATIV S Neo Windows phones to its device lineup — good news for Microsoft, which has yet to make much of a dent in that market.

Beau Mersereau, who leads the development team at the law firm of Fish and Richardson, felt that Windows 8 had created a true inflection point that might give a lot of people pause, but not one that would cause his firm to leave the Windows platform.

"We’re a law firm and documents are what we do," he said, "so for us, it’s about Office. We have a tight integration with Office now and we’re going to be rolling out Office 2013 in the fall. We’re on Windows 7 and Office 2010, so the question is really, do we stay with Windows 7 or move to Windows 8?"

IDC analyst Al Hilwa had this take on the conference in an email:

"When you look at the body of changes that is 8.1," he wrote, "you can’t help but be startled by what Microsoft has accomplished in 8 months. In addition to the long list of features, the app store re-design and the enterprise integration enablement, I have to add the retail work with Best Buy, the new device sizes, and the fact that [the upgrade] is free. All this could amount to a game changer for this platform. To be sure Microsoft’s work is not over, as there is much more alignment between Windows Phone and Xbox ecosystems still to be done, but on both the PC and tablet front, 8.1 looks like a release that will see a significant increase in adoption."

Not surprisingly, I didn’t hear a single complaint about Microsoft’s decision to hand out free Acer Iconia W3 Windows 8 tablets and Microsoft Surface Pros to attendees at this year’s show.


SQL Server 2014 Preview Available

Courtesy David Ramel, VisualStudioMagazine

The latest version of Microsoft’s flagship Relational Database Management System (RDBMS) is offered in two editions: the regular SQL Server 2014 Community Technology Preview 1 and the cloud-based SQL Server 2014 Community Technology Preview 1 on Windows Azure, both available from the TechNet Evaluation Center. The announcement comes one day before the Build 2013 developers conference in San Francisco.

Microsoft’s messaging about the new software put the Windows Azure cloud first and foremost, touting the company’s "Cloud OS." "Microsoft has made a big bet on what we call our cloud-first design principles," said Brad Anderson, corporate VP, in a blog post discussing the new previews.

"SQL Server 2014 features in-memory processing for applications ("Hekaton"), as well as data warehousing and business intelligence," Anderson said. "SQL Server 2014 also enables new hybrid scenarios like AlwaysOn availability, cloud backup and disaster recovery. It lives in Windows Azure and can be easily migrated to the cloud from on-premises."

Along with SQL Server 2014, Microsoft announced the availability of previews for Windows Server and System Center, both as 2012 R2 versions.

The SQL Server 2014 CTP will expire after 180 days or on Dec. 31, 2013, whichever comes first. Download options include an ISO DVD image, CAB file or Azure version. Microsoft recommends the ISO or CAB version to test the software’s new in-memory capabilities.


SignalR Revisited

Courtesy Eric Vogel, VisualStudioMagazine

Back in January, I covered how to use the SignalR Persistent Connection API to create a chat application. Since then, SignalR has been promoted from release candidate 1 to release version 1.1.2. With the minor version change come many API changes that have broken my old code. Today I’ll cover how to implement the same chat application using the updated API.

To get started, create a new ASP.NET MVC 4 Internet Application within Visual Studio 2012. Then install the Microsoft ASP.NET SignalR NuGet package, as seen in Figure 1.

Figure 1. Installing the SignalR NuGet package.

Next, create a Chat directory within the project, then a new class named ChatData. The ChatData class contains two string properties, named Name and Message:

namespace SignalRRevisited.Chat
{
    public class ChatData
    {
        public string Name { get; set; }
        public string Message { get; set; }

        public ChatData()
        {
        }

        public ChatData(string name, string message)
        {
            Name = name;
            Message = message;
        }
    }
}

Now it’s time to implement the ChatConnection class, which uses the updated SignalR persistent connection API. Start out by adding the following using statements to the ChatConnection class:

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;
using Newtonsoft.Json;

Next, subclass the PersistentConnection class:

public class ChatConnection : PersistentConnection

Then I add the _clients member variable, which maps each connectionId to the user’s name:

private readonly Dictionary<string, string> _clients = new Dictionary<string, string>();

Next I override the OnConnected method, which used to be named OnConnectedAsync. In the method I add the user’s connectionId to the _clients Dictionary and broadcast a message notifying everyone that a new user has joined the chat room:

protected override Task OnConnected(IRequest request, string connectionId)
{
    _clients.Add(connectionId, string.Empty);
    ChatData chatData = new ChatData("Server", "A new user has joined the room.");
    return Connection.Broadcast(chatData);
}

Now, I override the OnReceived method, which used to be named OnReceivedAsync. In the method I deserialize the chat message sent from the client, set the user’s name in the _clients Dictionary, then broadcast the user’s message:

protected override Task OnReceived(IRequest request, string connectionId, string data)
{
    ChatData chatData = JsonConvert.DeserializeObject<ChatData>(data);
    _clients[connectionId] = chatData.Name;
    return Connection.Broadcast(chatData);
}

Next I override the OnDisconnected method, which used to be named OnDisconnectedAsync. The method broadcasts that the user has left and removes the username from the _clients Dictionary. Here’s the complete ChatConnection class:

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;
using Newtonsoft.Json;

namespace SignalRRevisited.Chat
{
    public class ChatConnection : PersistentConnection
    {
        private readonly Dictionary<string, string> _clients = new Dictionary<string, string>();

        protected override Task OnConnected(IRequest request, string connectionId)
        {
            _clients.Add(connectionId, string.Empty);
            ChatData chatData = new ChatData("Server", "A new user has joined the room.");
            return Connection.Broadcast(chatData);
        }

        protected override Task OnReceived(IRequest request, string connectionId, string data)
        {
            ChatData chatData = JsonConvert.DeserializeObject<ChatData>(data);
            _clients[connectionId] = chatData.Name;
            return Connection.Broadcast(chatData);
        }

        protected override Task OnDisconnected(IRequest request, string connectionId)
        {
            string name = _clients[connectionId];
            ChatData chatData = new ChatData("Server", string.Format("{0} has left the room.", name));
            _clients.Remove(connectionId);
            return Connection.Broadcast(chatData);
        }
    }
}

Now it’s time to implement the client-side JavaScript for the chat application. Create a new JavaScript file named ChatR.js within the Scripts folder. The client-side SignalR Persistent Connection API used for the chat application has not changed since the previous article. Copy the JavaScript below into ChatR.js (see the original article for more on the code).

$(function () {
    var myConnection = $.connection("/chat");

    myConnection.received(function (data) {
        $("#messages").append("<li>" + data.Name + ': ' + data.Message + "</li>");
    });

    myConnection.error(function (error) {
        console.warn(error);
    });

    myConnection.start()
        .promise()
        .done(function () {
            $("#send").click(function () {
                var myName = $("#Name").val();
                var myMessage = $("#Message").val();
                myConnection.send(JSON.stringify({ name: myName, message: myMessage }));
            })
        });
});

Now it’s time to implement the ChatR view. Create a new Razor view named ChatR within the Views\Home directory. The view for the chat application is very simple: it has a textbox for the user’s name, a textbox for a message and a button to send the message. Below the send button, the messages are appended to an unordered list element. In addition, I’ve moved the rendering of the jquerysignalr bundle to the view itself instead of the shared layout Razor view:

@{
    ViewBag.Title = "Chat";
}

<h2>Chat</h2>

@using (Html.BeginForm()) {
    @Html.EditorForModel();

<input id="send" value="send" type="button" />
<ul id="messages" style="list-style:none;"></ul>
}   
    
@section Scripts
{
     @Scripts.Render("~/bundles/jquerysignalr")
     <script src="~/Scripts/ChatR.js"></script> 
}

Next, I add a ChatR action to the HomeController class:

public ActionResult ChatR()
{
    var vm = new Chat.ChatData();
    return View(vm);
}

Here’s the completed HomeController class implementation:

using System.Web.Mvc;

namespace SignalRRevisited.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            return View();
        }

        public ActionResult ChatR()
        {
            var vm = new Chat.ChatData();
            return View(vm);
        }
    }
}

The last two steps are to add the jquerysignalr bundle and the needed route for the ChatR controller action. Open up the BundleConfig class and add the jquerysignalr bundle to the bottom of the class:

bundles.Add(new ScriptBundle("~/bundles/jquerysignalr").Include(
        "~/Scripts/json2.js",
        "~/Scripts/jquery.signalR-{version}.js"));

Your BundleConfig class should now look like this:

using System.Web.Optimization;

namespace SignalRRevisited
{
    public class BundleConfig
    {
        // For more information on Bundling, visit http://go.microsoft.com/fwlink/?LinkId=254725
        public static void RegisterBundles(BundleCollection bundles)
        {
            bundles.Add(new ScriptBundle("~/bundles/jquery").Include(
                        "~/Scripts/jquery-{version}.js"));

            bundles.Add(new ScriptBundle("~/bundles/jqueryui").Include(
                        "~/Scripts/jquery-ui-{version}.js"));

            bundles.Add(new ScriptBundle("~/bundles/jqueryval").Include(
                        "~/Scripts/jquery.unobtrusive*",
                        "~/Scripts/jquery.validate*"));

            // Use the development version of Modernizr to develop with and learn from. Then, when you're
            // ready for production, use the build tool at http://modernizr.com to pick only the tests you need.
            bundles.Add(new ScriptBundle("~/bundles/modernizr").Include(
                        "~/Scripts/modernizr-*"));

            bundles.Add(new StyleBundle("~/Content/css").Include("~/Content/site.css"));

            bundles.Add(new StyleBundle("~/Content/themes/base/css").Include(
                        "~/Content/themes/base/jquery.ui.core.css",
                        "~/Content/themes/base/jquery.ui.resizable.css",
                        "~/Content/themes/base/jquery.ui.selectable.css",
                        "~/Content/themes/base/jquery.ui.accordion.css",
                        "~/Content/themes/base/jquery.ui.autocomplete.css",
                        "~/Content/themes/base/jquery.ui.button.css",
                        "~/Content/themes/base/jquery.ui.dialog.css",
                        "~/Content/themes/base/jquery.ui.slider.css",
                        "~/Content/themes/base/jquery.ui.tabs.css",
                        "~/Content/themes/base/jquery.ui.datepicker.css",
                        "~/Content/themes/base/jquery.ui.progressbar.css",
                        "~/Content/themes/base/jquery.ui.theme.css"));

            bundles.Add(new ScriptBundle("~/bundles/jquerysignalr").Include(
                   "~/Scripts/json2.js",
                   "~/Scripts/jquery.signalR-{version}.js"));
        }
    }
}

Now it’s time to add the routing information for the chat application. Open up the RouteConfig class and add the following route:

RouteTable.Routes.MapConnection<Chat.ChatConnection>("chat", "/chat");

Finally, set the default controller action to ChatR:

routes.MapRoute(
                name: "Default",
                url: "{controller}/{action}/{id}",
                defaults: new { controller = "Home", action = "ChatR", id = UrlParameter.Optional }
            );

Here’s the completed RouteConfig class:

using System.Web.Mvc;
using System.Web.Routing;

namespace SignalRRevisited.App_Start
{
    public class RouteConfig
    {
        public static void RegisterRoutes(RouteCollection routes)
        {
            RouteTable.Routes.MapConnection<Chat.ChatConnection>("chat", "/chat");
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            routes.MapRoute(
                name: "Default",
                url: "{controller}/{action}/{id}",
                defaults: new { controller = "Home", action = "ChatR", id = UrlParameter.Optional }
            );
        }
    }
}

The SignalR Chat application is now finished, and you should be able to send chat messages across multiple browser windows, as seen in Figure 2.

Figure 2. The completed SignalR chat application.

That’s a rundown of some of the changes in the SignalR Persistent Connection API from the release candidate to the full release version. The main changes are the removal of the Async suffix from the base PersistentConnection class methods and the fact that the route configuration no longer needs a wildcard.
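
For quick reference, here is a rough before-and-after of those changes. The RC-era signatures below are reconstructed from memory and from the January article, so treat them as approximate rather than authoritative.

// SignalR 1.0 RC (old API, approximate):
//   protected override Task OnConnectedAsync(IRequest request, string connectionId)
//   protected override Task OnReceivedAsync(IRequest request, string connectionId, string data)
//   protected override Task OnDisconnectedAsync(IRequest request, string connectionId)
//   RouteTable.Routes.MapConnection<ChatConnection>("chat", "chat/{*operation}");

// SignalR 1.1.2 (current API, as used above):
//   protected override Task OnConnected(IRequest request, string connectionId)
//   protected override Task OnReceived(IRequest request, string connectionId, string data)
//   protected override Task OnDisconnected(IRequest request, string connectionId)
//   RouteTable.Routes.MapConnection<ChatConnection>("chat", "/chat");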


Microsoft Releases Previews of Visual Studio 2013, .NET Framework 4.5.1

Courtesy Keith Ward, VisualStudioMagazine

Microsoft announced preview editions of two of its most important developer-related products at today’s Build conference in San Francisco: Visual Studio 2013 and the Microsoft .NET Framework 4.5.1.

The preview of Visual Studio 2013 isn’t a surprise, as Microsoft announced earlier this month at TechEd that it was coming. Microsoft said at that time that it was looking at a fall time frame for an official release, and didn’t offer any different schedule at Build. The update of the .NET Framework, however, was a surprise, as it hadn’t been hinted at beforehand.

Previously, Microsoft Corporate VP of the Developer Division S. Somasegar noted in a blog post that Visual Studio 2013 focuses on "business agility, quality enablement and DevOps." Microsoft Technical Fellow Brian Harry has written before on application lifecycle workflow changes in Visual Studio 2013, including numerous enhancements such as agile portfolio management, version control, coding, testing, release management and team collaboration.

It’s unusual for Microsoft to do major updates to a key product two years in a row, but it does fall in line with CEO Steve Ballmer’s emphasis during his keynote about "rapid release." It was a theme Ballmer turned to repeatedly throughout his speech.

The .NET Framework 4.5.1 update has many changes, somewhat surprising for an incremental upgrade. Somasegar detailed the upgrades to both products in a long blog entry today. He called .NET 4.5.1 "a highly compatible, in-place update for .NET 4.5" that’s bundled with Visual Studio 2013 Preview and Windows 8.1 Preview. It can also be installed on Windows 8, Windows 7, Windows Vista and the corresponding Windows Server releases.

A big focus of the latest version of the .NET Framework, according to Somasegar, is debugging and diagnostics. He pointed to the example of viewing method return values in the debugger, which is now built into both .NET 4.5.1 and Visual Studio 2013. Another example is the ability to "Edit and Continue" in 64-bit processes. It enables developers to alter running .NET code while stopped at a breakpoint in the debugger, without the need to stop and restart.
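
To make that concrete, here is a small, purely illustrative C# snippet of the kind of code where return-value inspection pays off; the class and method names are invented for this example. Because the intermediate results are never stored in locals, older debuggers gave you nothing to hover over. As I understand the feature, stepping over the return line in Visual Studio 2013 now surfaces each call’s return value (in the Autos window, for instance).

public static class PricingSample
{
    // Hypothetical example: nested calls whose results never hit a local variable.
    public static decimal GetFinalPrice(int productId)
    {
        // Step over this line in the VS 2013 debugger to inspect the return
        // values of GetBasePrice, GetDiscount and ApplyDiscount.
        return ApplyDiscount(GetBasePrice(productId), GetDiscount(productId));
    }

    private static decimal GetBasePrice(int productId)
    {
        return 100m + productId;
    }

    private static decimal GetDiscount(int productId)
    {
        return productId % 2 == 0 ? 0.10m : 0m;
    }

    private static decimal ApplyDiscount(decimal price, decimal discount)
    {
        return price * (1 - discount);
    }
}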

One of the improvements in Visual Studio 2013 that got the biggest applause during the keynote was the addition of "call context." Somasegar explained the details in his blog:

Previously, it could be very difficult for a developer stopped at a breakpoint to know the asynchronous sequence of calls that brought them to the current location. Now in Visual Studio 2013, the Call Stack window surfaces this information, factoring in new diagnostics information provided by the runtime. Further, when an application stops making visible forward progress, it’s often difficult to diagnose and to understand what asynchronous operations are currently in flight such that their lack of completion might be causing the app to hang. In Visual Studio 2013, the Tasks window (formerly called Parallel Tasks in Visual Studio 2010 and Visual Studio 2012) now includes details on these async operations so that you can break into your app in the debugger and easily see and navigate to everything that’s in flight.

Other Visual Studio 2013 improvements include C++11 standards support; performance jumps for XAML in Windows Store apps; better async debugging of JavaScript; IntelliSense support in DOM Explorer; and enhancements to the JavaScript Console.

Microsoft Technical Fellow Harry said in a blog entry that Visual Studio 2013 Preview — which includes the .NET Framework 4.5.1 Preview — is a "go-live" version, meaning that Microsoft will provide support for use in production environments. Be forewarned, though: "I do expect there are some bugs," he said. He also mentioned that Visual Studio 2013 is a side-by-side install, so it should be safe to install and use with another version of Visual Studio on the same computer.

The Visual Studio 2013 and .NET Framework 4.5.1 downloads are available here.


ASP.NET, Web Tools Get an Update

Courtesy Keith Ward, VisualStudioMagazine

Along with the new release of Visual Studio 2013 preview comes new tooling and a new version of ASP.NET, Microsoft’s framework for building Web sites and Web applications.

The updates amount to more of a refresh of ASP.NET and Web Tools, rather than a major upgrade. The tools are bundled into VS 2013 preview, so there’s no need to download them separately for those running the preview.

One of the more obvious upgrades is to the main ASP.NET UI. Called "One ASP.NET," the interface offers a number of templates under one umbrella, including Web Forms, MVC (model-view-controller), Web API, SPA, Facebook and mobile. The release notes for the upgrade state that One ASP.NET takes a "step towards unifying our set of experiences so that you should be able to achieve the same set of functionality no matter how you started building your ASP.NET application."

Other updated or new tools include:

  • ASP.NET Identity, a set of tools for authentication in ASP.NET applications.
  • ASP.NET Web Forms, a foundational technology for building drag-and-drop sites.
  • ASP.NET MVC 5, which uses a patterns-based approach to separate business, input and UI logic.
  • ASP.NET Web API 2, for building HTTP services and RESTful applications.
  • Scaffolding, a new code generation framework for MVC, Web Forms and Web API projects.
  • Entity Framework, with a beta of version 6.0.
  • ASP.NET SignalR 2.0 beta 2, which includes support for Xamarin’s MonoTouch and MonoDroid cross-platform tools, and a portable .NET Client Library (a brief client sketch follows this list).
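
Since the SignalR persistent-connection chat is covered at length earlier in this archive, here is a minimal sketch of what the portable .NET client side of that story looks like. It is not taken from the release notes: it assumes the Microsoft.AspNet.SignalR.Client NuGet package, and the host URL is a placeholder.

using System;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;
using Newtonsoft.Json;

public static class ChatClientSketch
{
    public static async Task RunAsync()
    {
        // Placeholder URL; point this at the /chat persistent connection
        // from the earlier SignalR article.
        var connection = new Connection("http://localhost:12345/chat");

        // Raw persistent connections deliver plain strings, so the client
        // handles its own serialization and deserialization.
        connection.Received += data => Console.WriteLine(data);
        connection.Error += ex => Console.WriteLine("Connection error: " + ex.Message);

        await connection.Start();
        await connection.Send(JsonConvert.SerializeObject(
            new { Name = "ConsoleUser", Message = "Hello from the .NET client" }));
    }
}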

The preview released today is not officially supported by Microsoft, so developers should take that into account before installing in production environments.


How Hackers Beat The NSA

Courtesy GregoryFerenstein, TechCrunch


While the world parses the ramifications of the National Security Agency’s massive snooping operation, it’s important to remember an earlier government attempt at data collection and, more important, how a group of hackers and activists banded together to stop it.

In the early 1990s, the military was petrified that encryption technologies would leave it blind to the growing use of mobile and digital communications, so it hatched a plan to place a hardware patch that would give the NSA backdoor wiretap access: the so-called “Clipper Chip.”

After hearing about the plan, a grassroots cabal of hackers, engineers, and academics erupted in protest, sparking a nationwide campaign to discredit the security and business implications of the Clipper chip, ultimately bringing the NSA’s plans to a screeching halt.

Now, the anti-authority community of programmers and tech execs is gearing up for another fight against the NSA’s top-secret Internet-snooping apparatus, PRISM, and there are some important lessons to be learned from those victorious predecessors.

A Clash Of Tech And Culture

The MYK-78, a.k.a. the “Clipper Chip”

Intelligence agencies were as eager to monitor the digital schemings of terrorists during the days of Full House as they are today. Worried that the U.S.’ brilliant academic minds would inadvertently arm the country’s enemies with cutting-edge encryption, the government banned the export of any technology that could conceal communication.

“If you simply took this technology and released it widely, you were also potentially creating an opportunity for very small terrorist groups, criminals and the like to use this technology to get a kind of perfect information security,” recalls former NSA general counsel Stewart Baker.

So, encryption programs in early Internet browsers were officially treated like munitions, akin to missiles or sniper scopes. This is why you often saw mentions of nuclear weaponry in the terms of service for programs using cryptography.

The ban wasn’t sustainable because a quickly growing segment of shopaholics wanted the ability to safely buy Captain Planet t-shirts through the World Wide Web, so the NSA knew it couldn’t hold back the entirety of secure e-commerce for national security purposes. As a first step toward allowing technology exports, the Clinton White House lobbied for a pencil-eraser-size hardware patch that would, at the very least, allow intelligence agencies to extend their cherished practice of wiretapping to Zack Morris-style cellular telephones.

Ultimately, the plan was defeated by the very same contingent of technologists and businesses that are fighting the NSA’s PRISM program. “Every technology has with it a predominant ideology part of the culture,” says Baker. “There is a predominant ideology that is handed down from professor to student that says, you know, ‘we have to lean against abuses of this technology to make the state stronger’…people will write code that maximizes individual autonomy and reduces the authority of the government.”

Baker recalls how a coordinated ideological effort, combining alternative encryption software, academic attacks on Clipper’s vulnerabilities, and big-business lobbying, took down the NSA’s plan.

Money Talks

“A subculture clash became a battle between Microsoft at the height of its powers and a national security establishment,” recalls Baker, who argues that the need to export products, especially for e-commerce, compelled the business community to win over members of Congress.

Ray Ozzie, Microsoft’s former Chief Software Architect, testified before Congress and let vulnerable members know that encryption regulations could have cost them between $6 billion and $9 billion in lost annual revenue. It worked.

“The Government should not be in the business of mandating particular technologies,” said career Senator Patrick Leahy (still in office).

“They go to the White House, they go to congress, and they explain how it’s going to hurt their business,” adds Steven Levy, who wrote the book on the Clipper Chip wars, Crypto.

Even more than in the ’90s, the technology industry has close friends in government. The Bay Area fundraised more for Obama than either Hollywood (LA) or Wall Street (New York). Silicon Valley’s massive DC presence is paying off: Google’s intensive lobbying during the Federal Trade Commission’s monopoly investigation got a potential multi-million-dollar fine reduced to a stern warning.

Already we’re seeing Google’s request to disclose more data on NSA spying practices pay off: the Obama administration has indicated that it may loosen the gag order over which details it can publicize. The industry is just beginning to fight, but Silicon Valley paid big bucks during the campaigns and they have favors waiting to be called in.

The First Amendment Is Your Friend

Lobbying alone didn’t topple the Clipper Chip and export controls. Three months before the White House caved in to the tech industry, the Ninth Circuit Court of Appeals struck down export controls on First Amendment grounds.

“Government efforts to control encryption thus may well implicate not only the First Amendment rights of cryptographers intent on pushing the boundaries of their science, but also the constitutional rights of each of us as potential recipients of encryption bounty,” explained the landmark Bernstein vs. US Department of Justice decision.

Though the government officially appealed the ruling, it knew it had a weakened position. “Then the government came to us and said, ‘We want to settle the case,’” says John Gilmore, founder of the Electronic Frontier Foundation.

Today, the issue of government phone and internet snooping is largely a First Amendment issue. The NSA has gagged both senators and tech companies alike from talking about the program.

There is fierce disagreement over whether email spying has produced results. While the NSA claims that it helped stop the 2009 New York City subway bombing plot, public documents indicate that law enforcement got their best tip from documents in a hard drive, recovered by police in the course of normal investigations.

Google has filed a First Amendment complaint with the Attorney General, and Senator Leahy has proposed legislation to disclose more info to members of Congress (yes, intelligence info is even hidden from Congress).

So, when tech companies and civil liberty groups sue the government, know that they have a history of winning.

Build Tools Like the Dickens

The nail in the coffin for Clipper was the discovery of its inescapable vulnerabilities. Renowned hacker and Bell Labs engineer Matt Blaze “uncovered a flaw in Clipper that would allow a user to bypass the security function of the chip,” wrote Levy back in 1994. Clipper wasn’t just a backdoor for the government, but for any hacker who could get past its weak security wall.

On the offense, super-programmers were building free, open source encryption tools, such as Philip Zimmermann’s “Pretty Good Privacy,” which allowed better public oversight of their vulnerabilities and weren’t subject to export regulations. In other words, the government couldn’t stop the grassroots hacker community from spreading the very technology that it aimed to stop.

Today, tools for subverting the NSA have had limited appeal. There’s Tor for secure Internet browsing and RedPhone for secure calling, but they either require everyone to be using the same software or have complex implementations.

“Cryptography isn’t easy and the concepts behind it are not easy to understand. Generally, hiding the complexity of the problem only puts the user at greater risk,” says the Tor Project’s Andrew Lewman.

So, while citizens have tech companies and civil rights organizations on their side, the 4th Amendment needs a good user-interface designer.

A Winnable Fight

If history tells us anything, the fight against NSA secrecy is winnable. Intelligence leaders are ruled by elected officials, military practices are still susceptible to the courts, and hackers can create tools to mask users from broad Internet snooping. Every citizen, whether they vote, support a civil liberties organization, or build encryption tools, has a role to play.

As John Gilmore reminds me, “the one advantage we have over the NSA is that there are a lot more of us than there are of them.”


Wrath of God Type Stuff: Microsoft and Oracle working together

Courtesy Barb Darrow, Gigaom


SUMMARY:

Starting now, Oracle customers can run their databases and applications on Microsoft Hyper-V and Windows Azure, not only with Oracle’s blessing but with its certification.

Get ready for the skies to rain frogs: As of now, Oracle will certify and support Oracle databases, along with its applications, Oracle Linux, and Java, to run on Microsoft Hyper-V and Windows Azure platforms. And Oracle customers can run their existing Oracle-licensed software on Azure immediately. The news was announced Monday by Microsoft CEO Steve Ballmer, Server and Tools group president Satya Nadella and Oracle co-president Mark Hurd.

The two companies, which are long-time rivals in the database and middleware world, will work together to certify those applications on those platforms and execs on both sides said the pact was driven by joint customers of the companies.

It also represents a dramatic step for Oracle, which in the past has strongly discouraged customers from running anything but Oracle VM virtualization. In fact, it often would push back on support calls and ask customers to prove that their problem was related to Oracle and not to any third-party virtualization. That tactic typically went over like a lead balloon. Now, Hyper-V is clearly a near-first class citizen in Oracle’s world and that alone is worth a headline.

Azure already supported Java, but via OpenJDK, an open-source implementation of Java, Nadella said. “With this we have the official versions, licensed and supported from Oracle directly as part of their middleware stack as well as applications,” he said.

This deal is both bigger than and less than what had been anticipated. Last week, Oracle CEO Larry Ellison indicated on the company’s earnings call that third parties “like Microsoft” would utilize new multitenancy and other goodies in the upcoming Oracle 12c database in their cloud offerings. No mention of that was made on Monday, although it is still possible. But folks (ahem, that would be me) expected today’s news to be about Oracle’s databases running on Windows Azure; the fact that its WebLogic application server and applications would run there too was a surprise, even though it shouldn’t have been.

As should be expected, execs on both sides of the deal touted hybrid cloud as the model most enterprises will embrace, because it will let them keep some of their data and IP in-house while taking advantage of public cloud resources when needed. Hurd also said Oracle would continue to build its own “open cloud” efforts.

But to me this looks like an alliance designed to fend off further poaching of enterprise workloads by Amazon Web Services, the world’s largest public cloud. It could also be seen as a counterweight to VMware, which is trying to parlay its lead in server virtualization within company data centers into the cloud with its new vCloud Hybrid Service.

As 451 Group analyst Carl Brooks pointed out, companies have been able to run Oracle databases on AWS for some time. The hindrance to adoption, however, has been that Oracle and Microsoft licensing makes it more attractive for users to opt for other, lower-priced options: Ubuntu or another Linux instead of Windows on the OS side, or MySQL instead of Oracle for the database. The companies did say they will offer pay-per-use options as well as the ability to move existing licenses to Azure, so we’ll have to see just how competitive those options are.

Until these enterprise software companies make it both price competitive and easy to run their software in the cloud, they will continue to struggle with this deployment model and could see more enterprise workloads flow to Amazon’s public cloud.

This story was updated throughout the conference call with additional information.