.net programming, computers and assorted technology rants

Archive for October, 2013

Thieves Install Keyloggers at Nordstrom, Get Credit Card Info

Courtesy Casey Johnston, ArsTechnica

A trio of men may have installed keyloggers at a Nordstrom department store in Florida to skim credit card numbers, reports KrebsOnSecurity. According to a police report, the men plugged standard keyloggers into the backs of cash registers and returned to remove them some days later with the alleged intent to use the information to create fake credit cards.

The keyloggers the thieves used, priced around $30-40, imitate the look and design of PS/2 keyboard connectors. They sit in series on the keyboard cord, between the computer and the keyboard, and intercept the data transmitted between the two.
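For a sense of how little intelligence such a device needs, here is a minimal Python sketch that turns a captured stream of PS/2 set-2 scan codes back into the digits that were typed, roughly what a hardware keylogger's firmware does. The byte values are genuine set-2 make codes; the sample stream is invented for illustration.

    # Decode typed digits from a captured PS/2 set-2 scan-code stream.
    DIGIT_CODES = {
        0x16: "1", 0x1E: "2", 0x26: "3", 0x25: "4", 0x2E: "5",
        0x36: "6", 0x3D: "7", 0x3E: "8", 0x46: "9", 0x45: "0",
    }
    BREAK_PREFIX = 0xF0  # a key release is 0xF0 followed by the key's make code

    def decode_digits(stream):
        """Yield typed digits, skipping key releases and non-digit keys."""
        skip_next = False
        for byte in stream:
            if skip_next:          # this byte names the key just released
                skip_next = False
                continue
            if byte == BREAK_PREFIX:
                skip_next = True
                continue
            if byte in DIGIT_CODES:
                yield DIGIT_CODES[byte]

    # Invented sample: "4" pressed and released, then "1" pressed and released
    sample = [0x25, 0xF0, 0x25, 0x16, 0xF0, 0x16]
    print("".join(decode_digits(sample)))  # prints "41"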

The Aventura, Florida police report states that the Nordstrom store has security video footage of the three men entering and working as a team: two distracted the staff while a third cased the registers and the backs of the computers. The men returned a few hours later and repeated the scheme, but this time the third man installed the keyloggers. They came back a third time to collect the devices.

Krebs indicated that the keyloggers may not be able to skim data directly from a credit card reader, although some readers do use PS/2 connectors. If connected to the keyboard itself, the keyloggers likely would have been able to capture any credit card numbers typed in directly. A keylogger installed with the keyboard could, in theory, also have been able to capture applications for Nordstrom credit cards, which typically request personal information like names, addresses, birthdates, and social security numbers.
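The article doesn't say how the thieves would have sifted card numbers out of a raw keystroke log, but typed card numbers are easy to spot because nearly all of them satisfy the Luhn checksum. A minimal Python sketch of that check (the test numbers are standard public examples, not real cards):

    def luhn_ok(number: str) -> bool:
        """Luhn checksum: the standard well-formedness test for card numbers."""
        digits = [int(d) for d in number]
        plain = digits[-1::-2]                          # rightmost, then every other
        doubled = [sum(divmod(2 * d, 10)) for d in digits[-2::-2]]
        return (sum(plain) + sum(doubled)) % 10 == 0

    print(luhn_ok("4111111111111111"))  # True: the classic Visa test number
    print(luhn_ok("4111111111111112"))  # False: last digit corrupted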


Your Face and Comments May Appear in Google Shared Endorsements Without You Knowing

Courtesy Casey Johnston, ArsTechnica

A sampling of what Google’s shared endorsements will look like.

If you always wanted to see your shining face next to Google ads, your wish will soon be granted. Today Google announced plans to roll out “shared endorsements,” which will augment its own advertisements with information from users who rated, reviewed, or gave a +1 to the service or location in question.

The move echoes Facebook’s “sponsored stories,” where the social network started turning users’ likes or check-ins into ads on its site, all without asking permission or even notifying them. A public outcry, class-action lawsuit, $20 million settlement, and limitations on the use of users’ content followed.

Google revealed its shared endorsements scheme in a change to its terms of service. The updates state that going forward, friends, family, “and others” may see a user’s Google profile name, photo, and any endorsement they’ve created for a company alongside ads for that company. For instance, if Jon Brodkin gave the new Nickelback album 5 stars and said in a review that the band is “the voice of our generation,” the next time someone searches for that album, Brodkin’s rating and praise could appear alongside a sponsored placement.

Google specifies that the privacy scope of the content you share will affect how it is used in an endorsement. If Brodkin shares that favorable Nickelback review to only his Google+ circle of Friends, only they will see it. If he shares it publicly, which is the default sharing setting on Google content, anyone will see it.
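As a rough mental model, the rule the article describes reduces to a single visibility check. All names in the Python sketch below are hypothetical, invented for illustration; this is not Google's API:

    # Hypothetical model: an endorsement travels exactly as far as the review.
    from dataclasses import dataclass

    @dataclass
    class Review:
        author: str
        scope: str                       # "public" or "circles"
        circle: frozenset = frozenset()  # who may see it when circle-scoped

    def may_show_endorsement(review: Review, viewer: str) -> bool:
        """Public reviews can back an ad for anyone; circle-scoped
        reviews only for members of that circle."""
        return review.scope == "public" or viewer in review.circle

    r = Review("brodkin", "circles", frozenset({"alice", "bob"}))
    print(may_show_endorsement(r, "alice"))    # True: alice is in the circle
    print(may_show_endorsement(r, "mallory"))  # False: stays within the circle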

Users are opted in to Google’s new scheme by default. In the past, Google gave itself permission to use users’ +1s alongside advertisements unless the user specifically opted out. The new “shared endorsements” are an extension of that setting, wherein Google gives itself permission to take even more of a user’s content and place it alongside ads.

To opt out of being a shared endorsement, Google users must go to the “shared endorsement” settings page, which is currently not linked anywhere from either their Google+ account or privacy settings (the ads have yet to go into effect, so Google may be waiting to integrate the page until the feature is live). At the bottom of the page is a checkbox next to the phrase “Based upon my activity, Google may show my name and profile photo in shared endorsements that appear in ads.”

If a user previously opted out of Google using their +1s in ads, this box will appear unchecked. For anyone else, it will be checked by default.

The opt-out checkbox on the shared endorsements settings page.

As the New York Times pointed out, Google would not specify whether advertisers will be able to choose whether shared endorsements appear alongside their ads.

The handful of user reactions on Google’s own posts about the change range from neutral (because of the opt-out) to disappointed or indignant. “Very Facebookesque. Puts you off reviewing anything,” said Michael Sharp.

“Does the user get a cut of the ad value?” asked Matt Paonessa.

“For those disliking the idea of your reviews going (more) public, answer me, why else are you making reviews?” asked Petr Faitl.

Google is going about including its users in advertising in a slightly better way than Facebook did, making a semi-public announcement and providing a way to opt out. The real problem is that the majority of users simply don’t pay attention to these sorts of things.

Google is able to put changes to its services front and center if it chooses. It did so with the massive overhaul of its terms of service and privacy policy back in January 2012 by placing an alert both under the search box on its homepage and on search results pages. This shared endorsements change flies a bit further under the radar.

If you’re uncomfortable with the idea of your content being used in advertisements, even just certain types of it (no one has time to comb back through a profile and prune public posts that were never expected to appear in ads), there is a window of opportunity to avoid it altogether. Google has already opened the opt-out page and will not roll out shared endorsements until November 11.


Facebook to rip search opt-out from under those who were using it

Courtesy Casey Johnston, ArsTechnica

Here’s the dialog you’ll see if you were opted out of search, when Facebook gets around to opting you back in.

If you checked that box saying you don’t want to appear in Facebook search results, get ready: soon, that choice is going away. Facebook announced in a blog post Thursday that it’s removing the ability to opt out of appearing in search results, both for friends and globally, for those who’ve had it enabled.

Facebook actually removed the search opt-out for everyone who didn’t have it enabled early this year, around the time it introduced Graph Search. Now, ten months later, Facebook is giving the boot to anyone who actually cared enough to opt out, referring to the checkbox as an “old search setting.” Facebook claims that less than one percent of users were taking advantage of the feature.

In simpler times, Facebook was smaller and easier to navigate, and everyone had a privacy setting asking “Who can look up your timeline by name?” Now, Facebook says, there are so many profiles that users get confused when they know someone is on the service, perhaps a friend or a fellow group member, yet can’t find that person by search.

Facebook’s privacy settings have become shifting sands, increasingly unreliable; of course, Facebook is not beholden to its users to protect them from much of anything, and anyone who doesn’t like what Facebook is doing can leave. ReadWrite has a good run-through of the privacy settings you may want to survey and tweak. While they still exist, that is.


Microsoft ponies up $100K to researcher who figured out new Windows hack in 2 weeks

Courtesy Gregg Keizer, Computerworld

The security researcher who yesterday was awarded $100,000 by Microsoft spent about two weeks pondering, then demonstrating a new way to circumvent Windows’ defensive technologies.

In an interview today, James Forshaw, the head of vulnerability research at U.K.-based Context Information Security, described in the most general terms the work that resulted in the big bounty.

"When Microsoft announced the initial bounties, I first thought about the mitigations I wanted to go over." said Forshaw. "Windows has a lot of mitigating in place, so I started to brainstorm. I asked myself, ‘How would I do it [if I was a cyber criminal]?’"

From start to finish — from those brainstorming sessions to an exploit that proved his mitigation bypass approach worked — Forshaw said he spent about half a month on the project. "From my initial thought to a full working proof of concept was about two weeks," he said.

Forshaw stressed that those two weeks of solid work sat atop the years he’s spent in information security, hammering home the point that winning submissions, whether for a bonus program like Microsoft’s or for the bounties browser makers and other vendors run to collect details on specific vulnerabilities, almost always come from very experienced, long-time researchers.

"This is not something that anyone’s done before, but then again, nothing is completely revolutionary," said Forshaw.

Microsoft echoed that yesterday. In a Tuesday blog post, Katie Moussouris, a senior security strategist with the Microsoft Security Response Center (MSRC), and the manager of the bounty programs, said that a Microsoft engineer had independently found a variant of the attack technique class that Forshaw reported.

"But James’ submission was of such high quality and outlined some other variants such that we wanted to award him the full $100,000 bounty," wrote Moussouris.

Read More…Here


The NSA’s “Problem From Hell”

Courtesy Tom Simonite, MIT TechnologyReview

In 2011, General Michael Hayden, who had earlier been director of both the National Security Agency and the Central Intelligence Agency, described the idea of computer hardware with hidden “backdoors” planted by an enemy as “the problem from hell.” This month, news reports based on leaked documents said that the NSA itself has used that tactic, working with U.S. companies to insert secret backdoors into chips and other hardware to aid its surveillance efforts.

That revelation particularly concerned security experts because Hayden’s assessment is widely held to be true. Compromised hardware is difficult, and often impossible, to detect. Hardware can do things such as access data in ways invisible to the software on a computer, even security software. The possibility that computer hardware in use around the world might be littered with NSA backdoors raises the prospect that other nations’ agencies are doing the same thing, or that groups other than the NSA might find and exploit the NSA’s backdoors. Critics of the NSA say the untraceable nature of hardware flaws, and the potential for building them into many systems, also increases the risk that intelligence agencies that place them will be tempted to exceed legal restrictions on surveillance.

“Hardware is like a public good because everybody has to rely on it,” says Simha Sethumadhavan, an associate professor at Columbia University who researches ways to detect backdoors in computer chips. “If hardware is compromised in some way, you lose security in a very fundamental way.”

Despite a few allegations against various governments, there are no publicly confirmed cases of backdoors in computer hardware being deployed. However, in recent years security researchers have repeatedly demonstrated the power and stealth of compromised hardware, mostly by embedding backdoors into the firmware of PC components. One presentation at the Black Hat security conference last year showed off a way to backdoor a new PC so that even switching the hard drive won’t close the door (see “A Computer Infection That Can Never Be Cured”).

Read More…http://www.technologyreview.com/news/519661/nsas-own-hardware-backdoors-may-still-be-a-problem-from-hell/


Gotta Love It: Government Can’t Crack the Tor Browser It Created

Courtesy Wayne Rash, eWeek

There’s a saying about the left hand not knowing what the right hand is doing. Nothing illustrates this more clearly than the federal government’s dysfunctional relationship with Tor, the onion router. By now, you’ve heard that the National Security Agency is having a tough time unraveling Tor. This software bundle, built around the Firefox browser, routes Internet traffic through a series of relays, each of which adds a layer of encryption and anonymity. The Tor browser is freely available to anyone who wants to use it, including dissidents in nations with oppressive governments and, unfortunately, child abusers.

The problem with Tor, from the NSA’s viewpoint, is that it works too well. Nailing down who’s using it, decrypting what they’re doing, and doing all of that in a timely fashion is driving the NSA crazy. So, naturally, you have to ask yourself: what band of privacy advocates dreamed up this nearly uncrackable communications pathway? The answer may surprise you. Tor is the brainchild of the U.S. government. In fact, Tor was invented with the support of the U.S. Naval Research Laboratory, located near Washington, D.C., in suburban Maryland, just inside the Beltway. And yes, that is pretty close to the NSA, which is also located in suburban Maryland, although outside the Beltway.
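That layering is the essence of onion routing. Below is a toy Python sketch, not Tor’s actual protocol (real Tor negotiates per-hop keys over telescoping circuits); Fernet from the third-party ‘cryptography’ package stands in for the per-hop cipher:

    # Toy onion routing: the client wraps the message once per relay, exit
    # layer first, so each relay can peel exactly one layer in path order.
    from cryptography.fernet import Fernet

    relay_keys = [Fernet.generate_key() for _ in range(3)]  # entry, middle, exit

    def wrap(message: bytes, keys) -> bytes:
        for key in reversed(keys):   # encrypt for the exit relay first
            message = Fernet(key).encrypt(message)
        return message               # outermost layer belongs to the entry relay

    onion = wrap(b"GET http://example.com/", relay_keys)

    for hop, key in enumerate(relay_keys):   # each relay strips one layer
        onion = Fernet(key).decrypt(onion)
        print(f"relay {hop} peeled its layer")

    print(onion)  # b'GET http://example.com/' - only the exit sees plaintext

No single relay knows both who sent the traffic and where it is ultimately going, which is exactly what frustrates traffic analysis.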

Read More…http://www.eweek.com/security/tor-puts-nsa-at-odds-with-browsers-us-navy-creators-other-agencies.html?google_editors_picks=true#sthash.PriGvdAb.dpuf


You Guys Really Need to Watch “WarGames” Again

Courtesy Joshua Foust, DefenseOne.com

The United Nations has objected to autonomous drones like the Navy's X-47B, seen here successfully completing its historic first flight at Edwards Air Force Base, Calif., in 2011.

Scientists, engineers, and policymakers are all figuring out how drones can be used more effectively and intelligently, with more precision, less harm to civilians, longer range, and better staying power. One approach under development is increasing the autonomy of the drone itself.

Eventually, drones may have the technical ability to make even lethal decisions autonomously: to respond to a programmed set of inputs, select a target, and fire their weapons without a human reviewing or checking the result. Yet the idea of the U.S. military deploying a lethal autonomous robot, or LAR, is sparking controversy. Though autonomy might address some of the current downsides of how drones are used, it introduces new ones that policymakers are only beginning to grapple with.

The basic conceit behind a LAR is that it can outperform and outthink a human operator. "If a drone’s system is sophisticated enough, it could be less emotional, more selective and able to provide force in a way that achieves a tactical objective with the least harm," said Purdue University Professor Samuel Liles. "A lethal autonomous robot can aim better, target better, select better, and in general be a better asset with the linked ISR [intelligence, surveillance, and reconnaissance] packages it can run."

Read More…http://www.defenseone.com/technology/2013/10/ready-lethal-autonomous-robot-drones/71492/