.net programming, computers and assorted technology rants

Archive for March, 2013

Cliplets: Part Photo and Part Video

Check this out…really.

Microsoft Research's new project: Cliplets.


Currency Of The Revolution, Or Tool For Online Vendors? The Many Faces Of Bitcoin

Courtesy MakeUseOf


It’s become an annual event: the fall of Bitcoin. You’ve probably read about it multiple times, and maybe even believe that the online, decentralized currency is already gone forever.

It isn’t.

Created by a mysterious, anonymous entity back in 2009 – when the recession was at its worst – Bitcoin is a completely digital currency with no central servers. Transactions are distributed across the network of users, and developers claim counterfeiting to be impossible.

Anyone who wishes can mine Bitcoins by putting their computer to work solving complex math problems, but the total number of Bitcoins that can be created is capped. The currency is designed to increase in value as demand for Bitcoins goes up, and it has – occasionally too quickly to be sustainable. By the summer of 2011, for example, one Bitcoin was worth as much as $30 USD. Within weeks it fell as low as $5 – just one incident the media has called the death of Bitcoin.
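
For the curious, here's a minimal, purely illustrative sketch of the proof-of-work idea behind that mining. It is not the real Bitcoin protocol – actual mining double-hashes a specific block-header layout against a network-adjusted difficulty target – but it shows why mining amounts to burning CPU cycles on a hash lottery:

```python
import hashlib

def mine(block_data: str, difficulty_bits: int = 20) -> int:
    """Search for a nonce whose SHA-256 hash of (data + nonce) falls below a
    target. Illustrative only; real Bitcoin uses double SHA-256 over a
    specific block-header format and a network-adjusted difficulty."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce
        nonce += 1

# 16 bits of difficulty means roughly 2**16 (~65,000) attempts on average.
print(mine("example block", difficulty_bits=16))
```

Raising `difficulty_bits` makes the search exponentially harder, which is the same lever the real network uses to keep the supply of new coins on schedule.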

These spikes and drops – accelerated by speculators – haven’t stopped Bitcoin’s value from rising over time. Outliers aside, the currency’s been trending upwards since its creation in 2009. As I write this, a single Bitcoin is worth over $48 USD – the most it’s ever been worth. This just days after a glitch temporarily brought the price down to $37.

It’s unclear whether the current value will hold, but major online companies – including Reddit and WordPress – are starting to accept the currency, increasing its legitimacy.

Around the world, organizations shut out by traditional merchant processors – Wikileaks, LulzSec and the creator of a 3D printable gun, for example – are turning to the digital currency for donations. And online casinos based entirely around Bitcoin are bringing in serious revenue. A vibrant community of developers, users and enthusiasts surrounds Bitcoin, and more than a few exchanges around the world allow anyone to trade a conventional currency for the digital one.

So yeah: Bitcoin’s not dead.

Read More…

http://www.makeuseof.com/tag/currency-of-the-revolution-or-tool-for-online-vendors-the-many-faces-of-bitcoin-feature/


Who needs Paperboys? Replace them with Drones

Courtesy Gizmodo

Drones Could Replace Eager Youths On Paper Routes

Innovation in newspaper delivery techniques hasn’t really seemed like a priority in a while because of the whole death-of-print thing and whatever. But since drones categorically improve all situations, a local French postal service is turning paper routes into air routes.

The postal service "La Poste Groupe" is using Auvergne, France, as a test region for a new drone delivery program that will employ Parrot quadricopters to deliver local papers. Tests officially begin in May and will consist of 20 drones controlled by 20 postal workers from iOS or Android devices. The goal is for the system to be ready for 7am deliveries.

The program faces problems with battery life and range, because the quadricopters only have a range of about 164 feet and 30 minutes of flight time. No word on whether privacy concerns are being viewed as…a concern. [La Poste via VentureBeat]


Apple Developers Rain Down on iCloud

Courtesy Ars Technica


Apple’s iCloud is marketed to us end users as a convenient and centralized way to manage data on all of our Macs and iOS devices: sync contacts and bookmarks, re-download music and apps, back up iOS devices, and sync documents and data for third-party apps as MobileMe did. The last item, syncing of documents and data, is one of the least glossy features of iCloud, but it is one of the most important, and it should be among the most straightforward. Right?

Perhaps not. Almost a year after Apple shut down MobileMe for good in favor of iCloud, third-party developers have begun to speak out about the difficulty involved in working with Apple’s cloud service. A piece published at The Verge this week highlights many of those complaints, with quotes coming from well-known developers and anonymous sources alike about the challenges faced by the developer community. From data loss and corruption to unexpected Apple ID use cases, developers have seen it all—but are stymied by the persistence of problems that prevent them from shipping products with working iCloud support.

What’s the big problem, exactly? According to Bare Bones Software’s Rich Siegel, there are a number of moving parts to iCloud that all affect how things come out on the other end.

"In concept, the service is pretty simple. A central iCloud server holds the truth: the canonical version of the user’s data for an app. As the user manipulates an app’s data, iCloud tracks and reconciles the changes into the central truth and makes sure that all copies of the data, on each computer, are brought up to date," Siegel told Ars. "In order for this to work, though, a lot has to happen behind the scenes. What we casually refer to as iCloud is many parts, each with a role to play."

Indeed, there are multiple ways in which iCloud enables the syncing of data, though both users and developers are kept in the dark when things go wrong. Siegel described scenarios in which iCloud simply declares that a file upload has timed out ("Apart from not being semantically relevant, the message is also unhelpful because it doesn’t provide any information that either the user or developer can apply to diagnose and resolve the problem"), or says that corrupted baselines are causing sync problems without making the problem visible, or just plain barfs up an opaque, internal error. This has resulted not just in headaches for developers, but also in inconvenience, confusion, and even anger on the part of end users, who go on to rate applications poorly because of these symptoms.

"When it fails, there’s no option to recover—all you can do is try again (and again…) until it finally works. And when it does initialize successfully, it can take an extremely long time," Siegel said. "In some cases, we’ve seen delays of up to 25 minutes while waiting for the iCloud stack to initialize. There’s no discernible consistency or rationale for when it says no and when it finally says yes… you can just keep trying, and eventually it might just work, or not."

Opaque errors are just the beginning—developers are also frustrated with how iCloud handles a user’s data if the user chooses to turn off document and data syncing. Doing this, it turns out, completely removes a user’s locally stored iCloud data. And signing out of iCloud results in the system moving iCloud data outside of an application’s sandbox container, making it impossible for the app to use the data any longer. The assumption here is clear: you’re either using iCloud exclusively for data storage or you don’t want to use that data at all.

Indeed, Core Data syncing is one of the main parts of iCloud causing headaches for developers. Black Pixel recently mentioned its own Core Data problems in a blog post about the future of NetNewsWire’s syncing capabilities. "As far as sync is concerned, we knew we would likely need an alternative to Google Reader as early as last year. At the time, the option that seemed to make the most sense was to embrace iCloud and Core Data as the new sync solution of choice. We spent a considerable amount of time on this effort, but iCloud and Core Data syncing had issues that we simply could not resolve," wrote Black Pixel’s Daniel Pasco.

Another developer, Michael Göbel, wrote in a blog post titled "Why all my iOS Apps are on hold": "Core Data and iCloud sync are still a joke. I can’t count the number of developers and companies that all ran into the same trouble and finally gave up—meaning they dropped iCloud support completely after hundreds of thousands of users lost their data."

Siegel expanded a bit upon some of the problems Core Data presents. "This is where the rubber meets the road for database-backed applications," he said. "Core Data is the application-level database framework supplied by OS X and iOS that provides the means for applications to store items, and data about those items, in a single database."

Returning to the iCloud signout problem, he explained how his company ran into problems dealing with the limitations of Core Data and sandboxing with its product Yojimbo.

"The recovery from iCloud signout involves taking the opportunity to migrate all of your Core Data storage from ‘Mobile Documents’ to the private sandbox container on your Mac. We found, to our dismay, that the practical reality didn’t hold up to theory—part of the problem is that you don’t get notified until after the data has been made inaccessible, and once in that state, there’s no choice but to use Core Data to make a copy of the data that’s just been sequestered," Siegel told Ars. "And of course, given a database of sufficient size, the process of using Core Data to relocate the database ties up the application in an unresponsive state, without visible progress, for as long as it takes. (And woe betide you if something goes wrong in the middle of it.)"

These are only some of the issues iCloud has presented to third-party developers, and Apple’s help in resolving them has reportedly been ineffective. Some—including Black Pixel—have begun to create their own syncing services, while others opt to rely on other solutions like Dropbox. Others still are holding out hope that Apple will hear their cries and offer some help. "We and other affected developers are continuing to iterate with Apple regarding the technical problems we’ve run into. However, if iCloud sync can’t be made to work, perhaps another service will do the job," Siegel said.


How Spamhaus’ attackers turned DNS into a weapon of mass destruction

Courtesy Ars Technica


A little more than a year ago, details emerged about an effort by some members of the hacktivist group Anonymous to build a new weapon to replace their aging denial-of-service arsenal. The new weapon would use the Internet’s Domain Name Service as a force-multiplier to bring the servers of those who offended the group to their metaphorical knees. Around the same time, an alleged plan for an Anonymous operation, "Operation Global Blackout" (later dismissed by some security experts and Anonymous members as a "massive troll"), sought to use the DNS service against the very core of the Internet itself in protest against the Stop Online Piracy Act.

This week, an attack using the technique proposed for use in that attack tool and operation—both of which failed to materialize—was at the heart of an ongoing denial-of-service assault on Spamhaus, the anti-spam clearing house organization. And while it hasn’t brought the Internet itself down, it has caused major slowdowns in the Internet’s core networks.

DNS Amplification (or DNS Reflection) remains possible after years of security expert warnings. Its power is a testament to how hard it is to get organizations to make simple changes that would prevent even recognized threats. Some network providers have made tweaks that prevent botnets or "volunteer" systems within their networks from staging such attacks. But thanks to public cloud services, "bulletproof" hosting services, and other services that allow attackers to spawn and then reap hundreds of attacking systems, DNS amplification attacks can still be launched at the whim of a deep-pocketed attacker—like, for example, the cyber-criminals running the spam networks that Spamhaus tries to shut down.

Hello, operator?

The Domain Name Service is the Internet’s directory assistance line. It allows computers to get the numerical Internet Protocol (IP) address for a remote server or other network-attached device based on its human-readable host and domain name. DNS is organized in a hierarchy; each top-level domain name (such as .com, .edu, .gov, .net, and so on) has a "root" DNS server keeping a list of each of the "authoritative" DNS servers for each domain registered with them. If you’ve ever bought a domain through a domain registrar, you’ve created (either directly or indirectly) an authoritative DNS address for that domain by selecting the primary and secondary DNS servers that go with it.

When you type "arstechnica.com" into your browser’s address bar and hit the return key, your browser checks with a DNS resolver—your personal Internet 411 service—to determine where to send the Web request. For some requests, the resolver may be on your PC. (For example, this happens if you’ve requested a host name that’s in a local "hosts" table for servers within your network, or one that’s stored in your computer’s local cache of DNS addresses you’ve already looked up.) But if it’s the first time you’ve tried to connect to a computer by its host and domain name, the resolver for the request is probably running on the DNS server configured for your network—within your corporate network, at an Internet provider, or through a public DNS service such as Google’s Public DNS.

There are two ways for a resolver to get the authoritative IP address for a domain name that isn’t in its cache: an iterative request and a recursive request. In an iterative request, the resolver pings the top-level domain’s DNS servers for the authoritative DNS for the destination domain, then it sends a DNS request for the full hostname to that authoritative server. If the computer that the request is seeking is in a subdomain or "zone" within a larger domain—such as www.subdomain.domain.com—it may tell the resolver to go ask that zone’s DNS server. The resolver "iterates" the request down through the hierarchy of DNS servers until it gets an answer.

But on some networks, the DNS resolver closest to the requesting application doesn’t handle all that work. Instead, it sends a "recursive" request to the next DNS server up and lets that server handle all of the walking through the DNS hierarchy for it. Once all the data is collected from the root, domain, and subdomain DNS servers for the requested address, the resolver then pumps the answer back to its client.
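
If you want to see the difference between the two request styles on the wire, here's a small sketch using the third-party dnspython package (an assumption on my part—the article doesn't mention it; install it with pip install dnspython). The resolver address is just an example:

```python
# Recursive vs. iterative-style DNS queries, using dnspython (pip install dnspython).
import dns.flags
import dns.message
import dns.query

resolver_ip = "8.8.8.8"  # example public resolver; use one you're allowed to query

# Recursive query: the RD ("recursion desired") flag asks the server to walk
# the DNS hierarchy on our behalf and return the final answer.
recursive = dns.message.make_query("arstechnica.com", "A")  # RD is set by default
print(dns.query.udp(recursive, resolver_ip, timeout=5).answer)

# Iterative-style query: clear RD, so a server that honors it returns only a
# referral (or whatever it has cached) and leaves the hierarchy-walking to us.
iterative = dns.message.make_query("arstechnica.com", "A")
iterative.flags &= ~dns.flags.RD
print(dns.query.udp(iterative, resolver_ip, timeout=5).answer)
```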

How DNS queries are supposed to work—when they’re not being used as weapons.

To save time, DNS requests don’t use the "three-way handshake" of the Transmission Control Protocol (TCP) to make all these queries. Instead, DNS typically uses the User Datagram Protocol (UDP)—a "connectionless" protocol that lets the server fire and forget requests.

Pump up the volume

That makes the sending of requests and responses quicker—but it also opens up a door to abuse of DNS that DNS amplification uses to wreak havoc on a target. All the attacker has to do is find a DNS server open to requests from any client and send it requests forged as being from the target of the attack. And there are millions of them.

The "amplification" in DNS amplification attacks comes from the size of those responses. While a DNS lookup request itself is fairly small, the resulting response of a recursive DNS lookup can be much larger. A relatively small number of attacking systems sending a trickle of forged UDP packets to open DNS servers can result in a firehose of data being blasted at the attackers’ victim.

DNS amplification attacks wouldn’t be nearly as amplified if it weren’t for the "open" DNS servers they use to fuel the attacks. These servers have been configured (or misconfigured) to answer queries from addresses outside of their network. The volume of traffic that can be generated by such open DNS servers is huge. Ars has previously reported on a paper presented by Randal Vaughan of Baylor University and Israeli security consultant Gadi Evron at the 2006 DefCon security conference. The authors documented a series of DNS amplification attacks in late 2005 and early 2006 that generated massive traffic loads for the routers of their victims. In one case, the traffic was "as high as 10Gbps and used as many as 140,000 exploited name servers," Vaughan and Evron reported. "A DNS query consisting of a 60 byte request can be answered with responses of over 4000 bytes, amplifying the response packet by a factor of 60."
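
The back-of-the-envelope math on those figures is worth spelling out—an attacker needs to originate only a small fraction of the traffic that ultimately lands on the victim:

```python
# Quick arithmetic using the figures Vaughan and Evron quote above.
request_bytes = 60
response_bytes = 4000
amplification = response_bytes / request_bytes
print(f"amplification factor: ~{amplification:.0f}x")  # ~67x (the paper rounds to ~60)

# Attacker bandwidth needed to generate a 10 Gbps flood at that ratio:
target_bps = 10e9
attacker_bps = target_bps / amplification
print(f"attacker traffic needed: ~{attacker_bps / 1e6:.0f} Mbps")  # roughly 150 Mbps
```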

But even if you can’t find an open DNS server to blast recursive responses from, you can still depend on the heart of the Internet for a respectable hail of packet projectiles. A "root hint" request—sending a request for name servers for the "." domain—results in a response 20 times larger than the packet the request came in. That’s in part thanks to DNSSEC, the standard adopted to make it harder to spoof DNS responses, since now the response includes certificate data from the responding server.
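
You can measure that asymmetry yourself with a single harmless query—again using dnspython as an illustrative assumption, and pointing it at a resolver you're permitted to query:

```python
# Compare the size of a root ("." NS) query with its DNSSEC-enabled response.
import dns.message
import dns.query

server = "8.8.8.8"  # example resolver; substitute one you are allowed to query
query = dns.message.make_query(".", "NS", want_dnssec=True)  # enables EDNS + DO bit
response = dns.query.udp(query, server, timeout=5)

sent = len(query.to_wire())
received = len(response.to_wire())
print(f"request: {sent} bytes, response: {received} bytes, "
      f"ratio: {received / sent:.1f}x")
```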

A comparison of a "root hint" query and the response delivered by the DNS server. Not all data shown.


In the case of the attack on Spamhaus, the organization was able to turn to the content delivery network CloudFlare for help. CloudFlare hid Spamhaus behind its CDN, which uses the Anycast feature of the Border Gateway Protocol to cause packets destined for the antispam provider’s site to be routed to the closest CloudFlare point of presence. This spread out the volume of the attack. And CloudFlare was able to then shut off amplified attacks aimed at Spamhaus with routing filters that blocked aggregated DNS responses matching the pattern of the attack.

But that traffic still had to get to CloudFlare before it could be blocked. And that resulted in a traffic jam in the core of the Internet, slowing connections for the Internet as a whole.

No fix on the horizon

The simplest way to prevent DNS amplification and reflection attacks would be to prevent forged DNS requests from being sent along in the first place. But that "simple" fix isn’t exactly easy—or at least easy to get everyone who needs to participate to do.

There’s been a proposal on the books to fix the problem for nearly 13 years—the Internet Engineering Task Force’s BCP 38, an approach to "ingress filtering" of packets. First pitched in 1998 as RFC 2267 and formalized as BCP 38 in 2000, the proposal has gone nowhere. And while the problem would be greatly reduced if zone and domain DNS servers were simply configured not to return recursive or even "root hint" responses to requests received from outside their own networks, that would require action by the owners of those servers. It’s an action that doesn’t have a direct monetary or security benefit for them.

ISPs generally do "egress filtering"—they check outbound traffic to make sure it’s coming from IP addresses within their network.  This prevents them from filling up their peering connections with bad traffic.  But "ingress" filtering would check to make sure that requests coming in through a router were coming from the proper direction based on their advertised IP source.
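
The logic of BCP 38 is simple enough to fit in a few lines. This is only a toy illustration of the check—real networks implement it in router ACLs or with unicast reverse-path forwarding, not in Python—and the interface names and prefixes below are made up:

```python
# Toy sketch of the BCP 38 idea: a provider-edge check that a packet arriving
# on a customer-facing interface carries a source address from that customer's
# assigned prefixes. Interfaces and prefixes here are hypothetical examples.
from ipaddress import ip_address, ip_network

ALLOWED_SOURCES = {
    "cust-eth1": [ip_network("203.0.113.0/24")],
    "cust-eth2": [ip_network("198.51.100.0/25")],
}

def permit_ingress(interface: str, src_ip: str) -> bool:
    """Return False (drop) for any packet whose source address is not in the
    prefixes expected on the interface it arrived on."""
    src = ip_address(src_ip)
    return any(src in prefix for prefix in ALLOWED_SOURCES.get(interface, []))

print(permit_ingress("cust-eth1", "203.0.113.7"))  # True: legitimate source
print(permit_ingress("cust-eth1", "192.0.2.99"))   # False: forged source, dropped
```

If every edge network enforced that one check, forged-source DNS requests would never reach the open resolvers in the first place.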

Another possible solution that would eliminate the problem entirely is to make DNS use TCP for everything—reducing the risk of forged packets.  DNS already uses TCP for tasks like zone transfers. But that would require a change to DNS itself, so it’s unlikely that would ever happen, considering that you can’t even convince people to properly configure their DNS servers to begin with.
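
For completeness, the same kind of query carried over TCP looks like this with dnspython (again an assumed, illustrative package choice). The point is that the three-way handshake forces the client to prove it actually owns its source address before any sizable response is sent, so a forged address never receives the reflected data:

```python
# The earlier query, carried over TCP instead of UDP. A spoofing attacker
# never sees the SYN-ACK, so the connection (and the large response) never
# completes toward the victim.
import dns.message
import dns.query

query = dns.message.make_query("arstechnica.com", "A")
response = dns.query.tcp(query, "8.8.8.8", timeout=5)  # example resolver
print(len(response.to_wire()), "bytes received over TCP")
```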

Maybe the attack on Spamhaus will change that, and core network providers will move to do more to filter DNS traffic that doesn’t seem to match up with known DNS servers. Maybe, just maybe, BCP 38 will get some traction.


File Server Uptime: 16.5 years. Can You Beat That?

Courtesy Ars Technica

It’s September 23, 1996. It’s a Monday. The Macarena is pumping out of the office radio, midway through its 14-week run at the top of the Billboard Hot 100, doing little to improve the usual Monday gloom.

Easing yourself into the week, you idly thumb through a magazine and read about Windows NT 4.0, released just a couple of months earlier. You wonder to yourself whether Microsoft’s hot new operating system might finally be worth using.

Then it’s down to work. Microsoft can keep its fancy GUIs and graphical server operating systems. NetWare 3.12 is where it’s at: bulletproof file and print sharing. The server, named INTEL after its processor, needs an update, so you install it and reboot. It comes up fine, so you get on with the rest of your day.

Sixteen and a half years later, INTEL’s hard disks—a pair of full-height 5.25-inch 800 MB Quantum SCSI devices—are making some disconcerting noises from their bearings, and you’re tired of the complaints. It’s time to turn off the old warhorse.

Connection Terminated. It seems almost criminal.

The server was decommissioned by one of our forum users, Axatax, as documented in this thread.

Sixteen and a half years is a long time. Can any of you beat it?


AmEx, DDoS and the Cyber-fighters of Izz ad-Din al-Qassam

Courtesy Ars Technica

On March 28, American Express’ website went offline for at least two hours during a distributed denial of service attack. A group calling itself "the cyber-fighters of Izz ad-Din al-Qassam" claimed responsibility for the attack, which began at about 3:00pm Eastern Time.

In a statement, an American Express spokesperson said, "Our site experienced a distributed-denial-of-service (DDoS) attack for about two hours on Thursday afternoon…We experienced intermittent slowing on our website that would have disrupted customers’ ability to access their account information. We had a plan in place to defend against a potential attack and have taken steps to minimize ongoing customer impact."

The American Express DDoS is part of a new wave of attacks the Izz ad-Din al-Qassam group started two weeks ago, the latest phase of a larger campaign targeting US financial institutions that began last September. The group’s alleged goal is to force the take-down of an offensive YouTube video—or extract an ongoing price from American banks as long as the video stays up, which could be indefinitely.

These attacks are also part of a larger trend of disruptive and destructive attacks on financial institutions by apparently politically-motivated groups, the most damaging of which was the attack on South Korean banks and other companies last week. It’s a trend that has surprised some security analysts, considering that the financial industry has focused more on advanced persistent threat (APT) attacks and cyber-espionage in recent years.

Band of the Hand

Named after a Muslim cleric who led The Black Hand, an anti-British and anti-Zionist jihadist organization in the 1920s and 1930s, and sharing a name with the military wing of Hamas (which the group’s statements claim it is tied to), Izz ad-Din al-Qassam has taken credit for a variety of attacks on US financial institutions over the past year, all allegedly in protest against the posting of trailers for the film The Innocence of Muslims on YouTube. Until the film is removed, the group said it would target  "properties of American-Zionist Capitalists…This attack will continue till the Erasing of that nasty movie." [sic]

Unlike DDoS attacks waged by Anonymous in the past, the Izz ad-Din al-Qassam group has used scripts running on compromised Web servers to launch their attacks rather than "volunteer" desktop PCs or botnets of compromised Windows machines. That allows  attacks to leverage larger amounts of available bandwidth.

So far, there have been three distinct phases of the group’s attacks. Dan Holden, director of Arbor Networks’ Security Engineering & Response Team, told Ars in a phone interview that the previous two waves lasted between three and four weeks, with the group then taking a break—likely to do the work required to maintain their botnet of compromised servers and add to it as their existing bots are discovered and disabled.

And during the course of each attack phase, the group has been refining its attacks, as Ars’ Dan Goodin reported earlier this year. In January, security firm Incapsula found a new variant of the group’s attack tools, which spawned additional copies of itself on compromised servers to multiply the size of attacks.

There have been further refinements made to this approach in this latest wave, Holden said. "The biggest change is the maintenance and the growth in the botnet," he explained. "There has been a big investment on their part to keep the campaign growing. And they’ve added some twists and techniques to their tools as time goes on, focusing their attacks more on the particular applications of the banks they’re targeting. Now there are particular tools being used for a specific set of banks."

That refinement is the result of months of analyzing the websites of each of the banks that Izz ad-Din al-Qassam has targeted.  Holden said that during its past large-scale attacks the group also crawled the websites of its targets and used the intelligence collected during the attacks to learn more about their weaknesses.

Covering fire

While the Izz ad-Din al-Qassam group’s attacks are apparently purely to disrupt banks’ ability to do business, there is some concern that such denial-of-service attacks could be used as a cover for fraud activity by criminals operating botnets or using targeted attacks on banks to gain access to internal systems.

"Financial institutions are putting a lot of resources into countering DoS attacks," said George Tubin, senior security strategist at Trusteer, a firm that specializes in countering online financial fraud. "But what we have seen in the past is the use of DoS attacks to conceal a fraud attack. They create the perfect cover." While the banks’ security resources are focused on trying to counter the DoS attack, he said, criminals could use other vectors to gain access to accounts and perform transactions in the background before they can be detected.

That’s not to say that there’s necessarily any collusion between the DoS attackers and any potential fraudsters, Tubin emphasized, although it was possible. "They could be coordinated, but they are also frequent enough and common enough that criminals could do their own targeted attack once they see a DoS on an institution."

And those targeted attacks are becoming increasingly costly to banks. An FBI fraud alert last September revealed that attackers had compromised several financial institutions by infecting the computers of employees with malware—including keyloggers and remote control software that allowed them to capture employees’ passwords, access customers’ accounts and make wire transfers ranging from $400,000 to $900,000.

A well-funded attack

Still, Holden said that it’s unlikely that criminals are "coat-tailing" on the Izz ad-Din al-Qassam group’s attacks just yet. "It would have to be one of the incidences where the attackers can tell the site is down, [but then they] wouldn’t be able to get in anyhow. So it’s not as likely."

But even if the group behind the attacks isn’t profiting from them, Holden said it’s clear that there are very real investments being made in their activities—maybe not in servers or hard assets, but in the form of countless hours of maintenance of the botnet by finding new servers to exploit, and further development of attacks.

"Regardless of who’s behind this," Holden said, "it has to be funded at some level. Even if it’s hacktivists, it’s got to be funded hacktivism." That, he says, is because of both the amount of time dedicated to the attack, and to its ongoing refinement. "It’s not that these are the most sophisticated things in the world," he explained, "but it has been getting more sophisticated, and it’s growing."

The goal of the investment in the botnet hasn’t been to create the sort of massive DDoS launched on Spamhaus this week. Rather, Holden said, the goal seems to have "mainly been around being able to attack multiple targets. They’re not interested in the biggest DDoS they can make—they’re more interested in creating constant pressure to prove whatever they’re trying to prove. They’re in it for the long haul."