Courtesy Patrick Steele, VisualStudioMagazine.com
While typing in a URL and pressing Enter inside a browser is effortless, there’s a lot more going on behind the scenes. In this article, I’ll review some of the basics of HTTP and show how they’re being used in today’s modern, REST-based applications.
Tools of the Trade
Examining the raw HTTP traffic that goes back and forth between a client and server is actually quite easy these days. Many modern browsers allow you to view all of the network traffic they transmit and receive. I wrote an article earlier this year on using Chrome’s developer tools to examine network traffic (using the "Network" panel). If you’re running outside the browser, you may want to consider Fiddler2. It’s a standalone Windows Forms application that can monitor and display traffic from any running application, as shown in Figure 1.
Figure 1. Sample Fiddler2 screen.
There are two main versions of the HTTP protocol: 1.0 and 1.1. I could spend an entire article going over the differences between the two. For brevity’s sake, the key takeaway is that HTTP 1.1 was designed to be backward-compatible with 1.0. The 1.1 specification took the (never formally specified) 1.0 protocol and added some enhancements and clarifications. For this article, I’ll be describing the 1.1 protocol, as many REST-based services require it.
HTTP is, by design, a very simple protocol. A simpler protocol is easier to implement and, therefore, easier to adopt. That simplicity has clearly paid off: HTTP is used by millions of applications every day, most of them Web browsers (on both desktop and mobile devices).
Every HTTP exchange is a single request and a single response. That’s it. No complicated sequences or special handshaking. A client opens a connection (by default, on port 80) and sends a request to the server. The server processes the request and sends back a response. In HTTP 1.0, the connection is then closed, and this happens for every message (HTTP 1.1 allows the connection to be kept open and reused, but the one-request/one-response model is unchanged). HTTP is a stateless protocol: any state information to be shared between client and server must be retransmitted on each request. Because it’s stateless, subsequent HTTP requests don’t need to go to the same server as the first request (which makes a load balancer’s job much easier).
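The request/response exchange described above can be sketched at the wire level. The Python sketch below builds a minimal HTTP/1.1 request and parses a canned response; the host name, path, and response body are placeholders, and no network connection is made — the point is simply the text format of one request and one response.

```python
# Minimal illustration of a single HTTP/1.1 exchange (no network used).
# "example.com" and "/index.html" are placeholders for illustration.

def build_request(host, path):
    # HTTP/1.1 requires the Host header; "Connection: close" opts out of
    # the persistent connections that 1.1 otherwise enables by default.
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

def parse_status(response):
    # The status line is the first CRLF-terminated line of the response.
    status_line = response.split("\r\n", 1)[0]
    version, code, reason = status_line.split(" ", 2)
    return version, int(code), reason

request = build_request("example.com", "/index.html")
canned_response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "Content-Length: 5\r\n"
    "\r\n"
    "hello"
)
version, code, reason = parse_status(canned_response)
```

Tools like Fiddler2 or the browser's Network panel show exactly these raw lines going back and forth.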
If you’ve ever done any Web forms development and seen the giant hidden field called "__VIEWSTATE," that’s an example of state transmission. The state of every control on a Web forms page is passed back and forth. This allows the server to "rehydrate" the state of the Web form, act on the request and send back a complete "state" of the new Web form. This response is then processed by the browser by rendering an updated page.
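The idea behind that hidden field can be sketched in a few lines. This is a toy stand-in, not ASP.NET's actual __VIEWSTATE format (which serializes a control tree and can be signed and encrypted); it only shows the principle of round-tripping state through a hidden form field.

```python
# Toy sketch of state transmission via a hidden form field.
# NOT the real ASP.NET __VIEWSTATE format -- just the round-trip idea.
import base64
import json

def to_hidden_field(state):
    # Serialize the page state and encode it for safe embedding in HTML.
    payload = base64.b64encode(json.dumps(state).encode()).decode()
    return f'<input type="hidden" name="__VIEWSTATE" value="{payload}" />'

def from_hidden_field(payload):
    # On postback, the server decodes the field to "rehydrate" the state.
    return json.loads(base64.b64decode(payload))

state = {"TextBox1": "hello", "CheckBox1": True}
field = to_hidden_field(state)

# Extract the payload back out of the rendered field (base64 never
# contains a double quote, so splitting on it is safe here).
payload = field.split('value="')[1].split('"')[0]
restored = from_hidden_field(payload)
```

Every request carries the full state both ways, which is exactly why __VIEWSTATE can grow so large on complex pages.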
Courtesy Ondrej Balas, VisualStudioMagazine.com
With the release of Visual Studio 2013, there are plenty of new things to try out. In this article, I’ve highlighted the eight features which I consider to be the most helpful all around. Whether you work by yourself or on a team, these new features can save you time and improve your development experience. Note that there may be slight discrepancies between what you see in this article and what you see in Visual Studio 2013, as this article was written based on the latest release candidate (RC) of Visual Studio 2013, which became available in October 2013.
1: Prototyping and Throw-Away Applications
One of the first things you may notice when starting Visual Studio 2013 is that the “New Project” window looks a bit different, as shown in Figure 1. When creating a new project, the location and name of the solution no longer have to be set immediately. In previous versions, the project’s name and location had to be set before it was created, even if you were just testing something out and planned to delete it right away. For those who create many new projects, that meant either an extra step to delete the project or a cluttered project directory.
Visual Studio now works more like the Microsoft Office applications, allowing you to create a project, start coding, and defer the decision of whether (and where) to save it.
2: Peek Definitions
Peek Definitions lets you take a quick peek at a class or method definition without opening its file. You may be accustomed to pressing F12 to go to an object’s definition; now, if you press Alt+F12 instead, you can peek at the definition just below its usage. As shown in Figure 2, I had my cursor on “prod.Name” when I pressed Alt+F12, bringing up the definition of the Product class right inside my code window.
3: Improved Navigation and Search
Along the lines of Peek Definitions, you can also try pressing Ctrl+Comma (Ctrl+,), which behaves differently depending on the position of your cursor. If the cursor is on a blank line, you can just start typing and it will begin searching for what you type. If the cursor is on some code, that identifier is automatically entered into the search box to get you started. Figure 3 shows what that search looks like. You can then use the arrow keys or mouse to navigate to any of the search results.
4: CodeLens
CodeLens is a feature available only in the Visual Studio Ultimate edition (a decision that has drawn some controversy), but it adds some very useful behavior to the editor. By default, the number of times a property or method is referenced by your code is shown above that property or method. This information is helpful when changing existing code, since you’ll know whether the method is called from many places or just a handful. In larger projects, especially ones where you’re not familiar with the entire codebase, this can be a big time saver. The number of references is shown in a small font above each property or method on your class, as shown in Figure 4. Clicking the reference count shows all of the methods that call it, making it easy to find and navigate to those sections of your code.
While not shown here, CodeLens also has some nice Team Foundation Server (TFS) integration. If you’re using TFS, it will allow you to see commit history and unit tests targeting the code in question.
5: Scroll Bar Customization
In Visual Studio 2013, the scroll bar can now be customized to give you a better overview of large files. It can be set to show various annotations, such as changes, errors, breakpoints and more. Optionally, it can be set to “map mode,” which gives you a zoomed-out representation of your code right on the scroll bar itself. The difference between bar mode (the default) and map mode can be seen in Figure 5.
The options screen, as shown in Figure 6, is accessible by going to Tools > Options > Text Editor > All Languages > Scroll Bars.
6-8: Minor Tweaks
One of the smaller tweaks to Visual Studio is in the options screen itself. Take a look at Figure 7. Do you recognize that screen? The options screen not being resizable has been a pet peeve of mine for a long time, and I’m elated to know that it’s now resizable.
In the code window, you can also now easily move lines or blocks of code up or down. Try highlighting an entire line of code, and press ALT+UP or ALT+DOWN. The entire line will be shifted up or down — another great time saver.
Finally, if you code from multiple machines, the new synchronization features can be of great benefit. If you choose to sign in with your Microsoft account when using Visual Studio, and you sign in from multiple computers, your environment settings — like keyboard shortcuts and theme — will be synchronized between instances automatically.
What Are Your Favorite Changes?
Microsoft continues to make Visual Studio the best (in my opinion) development IDE out there. The changes in Visual Studio 2013 really show that Microsoft cares about making the developer experience better with each release, even in relatively insignificant areas like the options screen. Take some time to try out the new features, and hopefully use them to make writing code more enjoyable. Let me know which ones you like best in the comments below.
Courtesy Peter Bright, ArsTechnica.com
Security company RSA was paid $10 million to use the flawed Dual_EC_DRBG pseudorandom number generating algorithm as the default algorithm in its BSafe crypto library, according to sources speaking to Reuters.
The Dual_EC_DRBG algorithm is included in the NIST-approved crypto standard SP 800-90 and has been viewed with suspicion since shortly after its inclusion in the 2006 specification. In 2007, researchers from Microsoft showed that the algorithm could be backdoored: if certain relationships between numbers included within the algorithm were known to an attacker, then that attacker could predict all the numbers generated by the algorithm. These suspicions of backdooring seemed to be confirmed this September with the news that the National Security Agency had worked to undermine crypto standards.
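The kind of weakness the researchers described can be illustrated with a deliberately simplified stand-in. The generator below is NOT Dual_EC_DRBG (the real algorithm is built on elliptic-curve point multiplication, and its suspect relationship is between two curve points); all the constants here are made up. The point it demonstrates is the same, though: if an attacker knows a hidden relationship baked into the generator, observing a single output lets them recover the internal state and predict every output that follows.

```python
# Toy stand-in for a backdoored DRBG (NOT the real Dual_EC_DRBG).
# All constants are invented for illustration.

M = 2**31 - 1       # public modulus (a Mersenne prime, so inverses exist)
MULT = 48271        # public state-update multiplier
SECRET = 1234577    # the "backdoor" constant, known only to the attacker

class ToyDRBG:
    def __init__(self, seed):
        self.state = seed % M

    def next(self):
        # Each output is the new state masked by SECRET. Anyone who
        # knows SECRET can invert the mask and recover the state.
        self.state = (self.state * MULT) % M
        return (self.state * SECRET) % M

def attacker_predict(one_output, n):
    # Invert the mask to recover the internal state from one output,
    # then run the (public) update rule forward to predict n outputs.
    state = (one_output * pow(SECRET, -1, M)) % M
    predictions = []
    for _ in range(n):
        state = (state * MULT) % M
        predictions.append((state * SECRET) % M)
    return predictions

gen = ToyDRBG(seed=42)
first = gen.next()                        # attacker observes one output
predicted = attacker_predict(first, 3)    # and forecasts the future
actual = [gen.next() for _ in range(3)]
```

If `SECRET` is unknown, the outputs look unrelated to the state; with it, the generator has no secrets at all, which is exactly the property the researchers suspected of Dual_EC_DRBG.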
The impact of this backdooring seemed low. The 2007 research, combined with Dual_EC_DRBG’s poor performance, meant that the algorithm was largely ignored. Most software didn’t implement it, and the software that did generally didn’t use it.
One exception to this was RSA’s BSafe library of cryptographic functions. With so much suspicion surrounding Dual_EC_DRBG, RSA quickly recommended that BSafe users switch away from it in favor of the other pseudorandom number generation algorithms its software supported. This raised the question of why RSA had taken the unusual decision to use the algorithm in the first place, given the already widespread distrust surrounding it.
RSA said that it didn’t enable backdoors in its software and that the choice of Dual_EC_DRBG was essentially down to fashion: at the time that the algorithm was picked in 2004 (predating the NIST specification), RSA says that elliptic curves (the underlying mathematics on which Dual_EC_DRBG is built) had become “the rage” and were felt to “have advantages over other algorithms.”
Reuters’ report suggests that RSA wasn’t merely following trends when it picked the algorithm and that, contrary to its previous claims, the company inserted the presumed backdoor at the behest of the spy agency. The $10 million the agency is said to have paid was more than a third of the annual revenue earned from the crypto library.
Other sources speaking to Reuters said that the government did not let on that it had backdoored the algorithm, presenting it instead as a technical advance.
Courtesy Sean Gallagher, ArsTechnica.com
“What exactly did you say you were trying to do again?” the ranger asked me as we stood on a seawall at Fort McHenry, taking turns winding in hundreds of feet of kite string attached to a nine-foot kite.
The kite, a nine-foot delta wing, had landed near a channel marker buoy and was now a nine-foot delta wing sea anchor. Tethered to it was a modified plastic food container encasing a very wet Android phone that was never intended to be a submersible. As we pulled the kite in, I asked myself the same thing—what the hell was I doing?
What I was trying to do was replicate what the military, government agencies, and private companies typically do with satellites, aircraft, and drones: get a bird’s-eye view of the Earth’s surface and create a photographic map.
Instead, my attempt at do-it-yourself aerial mapping quickly turned into a fiasco involving a squadron of US Park Service rangers, a few dozen puzzled tourists, and the flagpole that stood where the Star Spangled Banner once flew. I only mapped the limits of my own sense of humor, the patience of the National Park Service, and the contours of the bottom of the Patapsco River. An effort I planned since August was starting to look like a complete failure.
Fortunately, dear reader, I am not easily deterred.