Courtesy Jeremy Kirk, IDG News Service
Network Solutions said Wednesday it has restored services after a distributed denial-of-service (DDoS) attack knocked some websites it hosts offline for a few hours.
The company, which is owned by Web.com, registers domain names, offers hosting services, sells SSL certificates and provides other website-related administration services.
Network Solutions wrote on Facebook around mid-day Wednesday EDT that it was under attack. About three hours later, it said most customer websites should resolve normally.
Some customers commented on Facebook, however, that they were still experiencing downtime. Many suggested a problem with Network Solutions’ DNS (Domain Name System) servers, which look up domain names and translate them into the IP addresses that a browser can request.
DDoS attacks are a favored method to disrupt websites and involve sending large amounts of data in hopes of overwhelming servers and causing websites to not respond to requests.
Focusing DDoS attacks on DNS servers has proven to be a very effective attack method. In early June, three domain name management and hosting providers — DNSimple, easyDNS and TPP Wholesale — reported DNS-related outages caused by DDoS attacks.
Hosting service DNSimple said it came under a DNS reflection attack, where DNS queries are sent to one party but the response is directed to another network, exhausting the victim network’s bandwidth.
Courtesy Dan Goodin, Ars Technica
Spanish authorities have arrested a 35-year-old Dutchman they say is “suspected of unprecedented heavy attacks” on Spamhaus, the international group that helps network owners around the world block spam.
A press release (English translation here) issued by the Dutch Public Prosecutor Service identified the suspect only by the initials SK and said he was living in Barcelona. A variety of circumstantial evidence, mostly taken from this Facebook profile, strongly suggests the suspect is one Sven Olaf Kamphuis. He’s the man quoted in a March 26 New York Times article saying a Dutch hosting company called CyberBunker, which Kamphuis is affiliated with, was behind distributed denial-of-service attacks aimed at Spamhaus. Kamphuis later denied he or CyberBunker had anything to do with the attacks.
With peaks of 300 gigabits per second, the March attacks were among the biggest ever recorded. Besides their size, they were also notable because they attacked the London Internet Exchange, a regional hub where multiple networks from different service providers connect. As Ars writer Peter Bright explained, the size and technique threatened to clog up the Internet’s core infrastructure and make access to the rest of the Internet slow or impossible. While some critics said that assessment was overblown, Bright provided this follow-up explaining why the attacks had the potential to break key parts of the Internet.
The crippling distributed denial-of-service (DDoS) attacks began a few weeks after Spamhaus added CyberBunker to one of the real-time blacklists that ISPs use to block e-mail from networks suspected of engaging in, or at least turning a blind eye to, the sending of spam. CyberBunker has long been known as an “anything goes” host provider. As long as content isn’t “child porn and anything related to terrorism,” the company permits it. According to an article published Friday by KrebsOnSecurity, Spamhaus officials contacted CyberBunker after seeing botnet controllers and illegal pharmaceutical operators hosted on its service. “We got a rude reply back, and he made claims about being his own independent country in the republic of CyberBunker, and said he was not bound by any laws and whatnot,” an unnamed Spamhaus official told reporter Brian Krebs. Kamphuis’ Facebook page has also claimed he has diplomatic immunity.
It should be emphasized that so far there is no official confirmation that the SK in custody is Kamphuis. Even if it is, he should still be presumed innocent until proven otherwise. Dutch prosecutors said SK’s Barcelona residence was searched and computers, data carriers, and mobile phones were seized. They also said they expect SK to be transferred to Dutch authorities soon. We’ll be eager to learn if there’s evidence that can conclusively tie the man to one of the biggest reported DDoS attacks ever.
Courtesy Ars Technica
A little more than a year ago, details emerged about an effort by some members of the hacktivist group Anonymous to build a new weapon to replace their aging denial-of-service arsenal. The new weapon would use the Internet’s Domain Name Service as a force-multiplier to bring the servers of those who offended the group to their metaphorical knees. Around the same time, an alleged plan for an Anonymous operation, "Operation Global Blackout" (later dismissed by some security experts and Anonymous members as a "massive troll"), sought to use the DNS service against the very core of the Internet itself in protest against the Stop Online Piracy Act.
This week, an attack using the technique proposed for use in that attack tool and operation—both of which failed to materialize—was at the heart of an ongoing denial-of-service assault on Spamhaus, the anti-spam clearing house organization. And while it hasn’t brought the Internet itself down, it has caused major slowdowns in the Internet’s core networks.
DNS Amplification (or DNS Reflection) remains possible after years of security expert warnings. Its power is a testament to how hard it is to get organizations to make simple changes that would prevent even recognized threats. Some network providers have made tweaks that prevent botnets or "volunteer" systems within their networks from staging such attacks. But thanks to public cloud services, "bulletproof" hosting services, and other services that allow attackers to spawn and then reap hundreds of attacking systems, DNS amplification attacks can still be launched at the whim of a deep-pocketed attacker—like, for example, the cyber-criminals running the spam networks that Spamhaus tries to shut down.
The Domain Name Service is the Internet’s directory assistance line. It allows computers to get the numerical Internet Protocol (IP) address for a remote server or other network-attached device based on its human-readable host and domain name. DNS is organized in a hierarchy; each top-level domain name (such as .com, .edu, .gov, .net, and so on) has a "root" DNS server keeping a list of each of the "authoritative" DNS servers for each domain registered with them. If you’ve ever bought a domain through a domain registrar, you’ve created (either directly or indirectly) an authoritative DNS address for that domain by selecting the primary and secondary DNS servers that go with it.
When you type "arstechnica.com" into your browser’s address bar and hit the return key, your browser checks with a DNS resolver—your personal Internet 411 service— to determine where to send the Web request. For some requests, the resolver may be on your PC. (For example, this happens if you’ve requested a host name that’s in a local "hosts" table for servers within your network, or one that’s stored in your computer’s local cache of DNS addresses you’ve already looked up.) But if it’s the first time you’ve tried to connect to a computer by its host and domain name, the resolver for the request is probably running on the DNS server configured for your network—within your corporate network, at an Internet provider, or through a public DNS service such as Google’s Public DNS.
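From an application’s point of view, all of this machinery is hidden behind a single stub-resolver call. A minimal sketch of that first hop (the hostname here is illustrative; `localhost` is used so the answer comes from the local hosts table rather than the network):

```python
import socket

# Ask the system's stub resolver for an address. Under the hood this
# consults the local hosts file, any cached DNS answers, and finally
# the DNS server configured for the network.
def lookup(hostname: str) -> str:
    return socket.gethostbyname(hostname)

print(lookup("localhost"))  # usually 127.0.0.1, served from the hosts file
```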
There are two ways for a resolver to get the authoritative IP address for a domain name that isn’t in its cache: an iterative request and a recursive request. In an iterative request, the resolver pings the top-level domain’s DNS servers for the authoritative DNS for the destination domain, then it sends a DNS request for the full hostname to that authoritative server. If the computer that the request is seeking is in a subdomain or "zone" within a larger domain—such as www.subdomain.domain.com—it may tell the resolver to go ask that zone’s DNS server. The resolver "iterates" the request down through the hierarchy of DNS servers until it gets an answer.
But on some networks, the DNS resolver closest to the requesting application doesn’t handle all that work. Instead, it sends a "recursive" request to the next DNS server up and lets that server handle all of the walking through the DNS hierarchy for it. Once all the data is collected from the root, domain, and subdomain DNS servers for the requested address, the resolver then pumps the answer back to its client.
[Figure: How DNS queries are supposed to work—when they’re not being used as weapons.]
To save time, DNS requests don’t use the "three-way handshake" of the Transmission Control Protocol (TCP) to make all these queries. Instead, DNS typically uses the User Datagram Protocol (UDP)—a "connectionless" protocol that lets the server fire and forget requests.
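That connectionless request can be seen in the DNS wire format itself (RFC 1035). A minimal sketch of building a query packet with only the standard library—the hostname and query ID are illustrative, and the commented `sendto` shows the fire-and-forget UDP send without actually performing it:

```python
import struct

def build_dns_query(hostname: str, query_id: int = 0x1234) -> bytes:
    """Build a minimal DNS query packet (RFC 1035) for an A record."""
    # Header: ID, flags (0x0100 = standard query, recursion desired),
    # QDCOUNT=1, then zero answer/authority/additional counts.
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question name: each label is length-prefixed, ending with a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

packet = build_dns_query("example.com")  # 29 bytes on the wire
# A recursive response to this tiny packet can be far larger -- the
# asymmetry that amplification attacks exploit. To actually send it:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(packet, (resolver_ip, 53))  # no handshake, no connection
```

Because UDP carries no handshake, nothing in that packet proves the source address is genuine—which is exactly what spoofed-request attacks rely on.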
Pump up the volume
That makes the sending of requests and responses quicker—but it also opens up a door to abuse of DNS that DNS amplification uses to wreak havoc on a target. All the attacker has to do is find a DNS server open to requests from any client and send it requests forged as being from the target of the attack. And there are millions of them.
The "amplification" in DNS amplification attacks comes from the size of those responses. While a DNS lookup request itself is fairly small, the resulting response of a recursive DNS lookup can be much larger. A relatively small number of attacking systems sending a trickle of forged UDP packets to open DNS servers can result in a firehose of data being blasted at the attackers’ victim.
DNS amplification attacks wouldn’t be nearly as amplified if it weren’t for the "open" DNS servers they use to fuel the attacks. These servers have been configured (or misconfigured) to answer queries from addresses outside their network. The volume of traffic that can be generated by such open DNS servers is huge. Ars has previously reported on a paper presented by Randal Vaughn of Baylor University and Israeli security consultant Gadi Evron at the 2006 DefCon security conference. The authors documented a series of DNS amplification attacks in late 2005 and early 2006 that generated massive traffic loads for the routers of their victims. In one case, the traffic was "as high as 10Gbps and used as many as 140,000 exploited name servers," Vaughn and Evron reported. "A DNS query consisting of a 60 byte request can be answered with responses of over 4000 bytes, amplifying the response packet by a factor of 60."
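The economics of that amplification are easy to work out. Using the figures from the Vaughn/Evron paper, a back-of-the-envelope calculation of what the attacker must supply to produce a given flood:

```python
# A 60-byte query that draws a 4,000-byte response.
request_bytes = 60
response_bytes = 4000
amplification = response_bytes / request_bytes  # roughly 66x

# Bandwidth the attacker must source (spread across bots) to point
# a 10 Gb/s flood at the victim:
target_flood_bps = 10e9
attacker_bps = target_flood_bps / amplification  # ~150 Mb/s

print(f"amplification factor: ~{amplification:.0f}x")
print(f"attacker bandwidth needed: ~{attacker_bps / 1e6:.0f} Mb/s")
```

In other words, a botnet with the combined upstream of a few dozen home connections can saturate a 10 Gb/s link—the "trickle in, firehose out" dynamic described above.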
But even if you can’t find an open DNS server to blast recursive responses from, you can still depend on the heart of the Internet for a respectable hail of packet projectiles. A "root hint" request—sending a request for name servers for the "." domain—results in a response 20 times larger than the packet the request came in. That’s in part thanks to DNSSEC, the standard adopted to make it harder to spoof DNS responses, since the response now includes cryptographic signature data from the responding server.
[Figure: A comparison of a "root hint" query and the response delivered by the DNS server. Not all data shown.]
In the case of the attack on Spamhaus, the organization was able to turn to the content delivery network CloudFlare for help. CloudFlare hid Spamhaus behind its CDN, which uses the Anycast feature of the Border Gateway Protocol to cause packets destined for the antispam provider’s site to be routed to the closest CloudFlare point of presence. This spread out the volume of the attack. And CloudFlare was able to then shut off amplified attacks aimed at Spamhaus with routing filters that blocked aggregated DNS responses matching the pattern of the attack.
But that traffic still had to get to CloudFlare before it could be blocked. And that resulted in a traffic jam in the core of the Internet, slowing connections for the Internet as a whole.
No fix on the horizon
The simplest way to prevent DNS amplification and reflection attacks would be to prevent forged DNS requests from being sent along in the first place. But that "simple" fix isn’t exactly easy—or at least easy to get everyone who needs to participate to do.
There’s been a proposal on the books to fix the problem for nearly 13 years—the Internet Engineering Task Force’s BCP 38, an approach to "ingress filtering" of packets. First pitched in 1998 as RFC 2267 and reissued in 2000 as RFC 2827, the proposal has gone nowhere. And while the problem would be greatly reduced if zone and domain DNS servers were simply configured not to return recursive or even "root hint" responses to queries received from outside their own networks, that would require action by the owners of those networks—action with no direct monetary or security benefit to them.
ISPs generally do "egress filtering"—they check outbound traffic to make sure it’s coming from IP addresses within their network. This prevents them from filling up their peering connections with bad traffic. But "ingress" filtering would check to make sure that requests coming in through a router were coming from the proper direction based on their advertised IP source.
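The check BCP 38 asks for is conceptually simple. A hypothetical sketch of a provider-edge filter—the prefixes below are documentation ranges from RFC 5737, standing in for whatever address blocks the provider has actually assigned to a customer port:

```python
import ipaddress

# Prefixes assigned to the customer attached to this port (illustrative).
CUSTOMER_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/25"),
]

def ingress_permits(source_ip: str) -> bool:
    """Accept a packet only if its source address belongs to the customer."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in CUSTOMER_PREFIXES)

print(ingress_permits("203.0.113.50"))  # legitimate source: forwarded
print(ingress_permits("192.0.2.99"))    # spoofed source: dropped at the edge
```

A packet claiming a source address outside the customer’s prefixes could not have legitimately originated there, so it is dropped before a forged DNS request ever leaves the network.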
Another possible solution that would eliminate the problem entirely is to make DNS use TCP for everything—reducing the risk of forged packets. DNS already uses TCP for tasks like zone transfers. But that would require a change to DNS itself, so it’s unlikely that would ever happen, considering that you can’t even convince people to properly configure their DNS servers to begin with.
Maybe the attack on Spamhaus will change that, and core network providers will move to do more to filter DNS traffic that doesn’t seem to match up with known DNS servers. And maybe, just maybe, BCP 38 will get some traction.
Courtesy Ars Technica
On March 28, American Express’ website went offline for at least two hours during a distributed denial of service attack. A group calling itself "the cyber-fighters of Izz ad-Din al-Qassam" claimed responsibility for the attack, which began at about 3:00pm Eastern Time.
In a statement, an American Express spokesperson said, "Our site experienced a distributed-denial-of-service (DDoS) attack for about two hours on Thursday afternoon…We experienced intermittent slowing on our website that would have disrupted customers’ ability to access their account information. We had a plan in place to defend against a potential attack and have taken steps to minimize ongoing customer impact."
The American Express DDoS is part of a new wave of attacks begun two weeks ago by the Izz ad-Din al-Qassam group, a continuation of the larger campaign against US financial institutions that the group launched last September. The group’s alleged goal is to force the take-down of an offensive YouTube video—or to extract an ongoing price from American banks for as long as the video stays up, which could be indefinitely.
These attacks are also part of a larger trend of disruptive and destructive attacks on financial institutions by apparently politically-motivated groups, the most damaging of which was the attack on South Korean banks and other companies last week. It’s a trend that has surprised some security analysts, considering that the financial industry has focused more on advanced persistent threat (APT) attacks and cyber-espionage in recent years.
Band of the Hand
Named after a Muslim cleric who led The Black Hand, an anti-British and anti-Zionist jihadist organization in the 1920s and 1930s, and sharing a name with the military wing of Hamas (which the group’s statements claim it is tied to), Izz ad-Din al-Qassam has taken credit for a variety of attacks on US financial institutions over the past year, all allegedly in protest against the posting of trailers for the film The Innocence of Muslims on YouTube. Until the film is removed, the group said it would target "properties of American-Zionist Capitalists…This attack will continue till the Erasing of that nasty movie." [sic]
Unlike DDoS attacks waged by Anonymous in the past, the Izz ad-Din al-Qassam group has used scripts running on compromised Web servers to launch their attacks rather than "volunteer" desktop PCs or botnets of compromised Windows machines. That allows attacks to leverage larger amounts of available bandwidth.
So far, there have been three distinct phases of the group’s attacks. Dan Holden, director of Arbor Networks’ Security Engineering & Response Team, told Ars in a phone interview that the previous two waves lasted between three and four weeks, with the group then taking a break—likely to do the work required to maintain their botnet of compromised servers and add to it as their existing bots are discovered and disabled.
And during the course of each attack phase, the group has been refining its attacks, as Ars’ Dan Goodin reported earlier this year. In January, security firm Incapsula found a new variant of the group’s attack tools, which spawned additional copies of itself on compromised servers to multiply the size of attacks.
There have been further refinements made to this approach in this latest wave, Holden said. "The biggest change is the maintenance and the growth in the botnet," he explained. "There has been a big investment on their part to keep the campaign growing. And they’ve added some twists and techniques to their tools as time goes on, focusing their attacks more on the particular applications of the banks they’re targeting. Now there are particular tools being used for a specific set of banks."
That refinement is the result of months of analyzing the websites of each of the banks that Izz ad-Din al-Qassam has targeted. Holden said that during its past large-scale attacks the group also crawled the websites of its targets and used the intelligence collected during the attacks to learn more about their weaknesses.
While the Izz ad-Din al-Qassam group’s attacks are apparently purely to disrupt banks’ ability to do business, there is some concern that such denial-of-service attacks could be used as a cover for fraud activity by criminals operating botnets or using targeted attacks on banks to gain access to internal systems.
"Financial institutions are putting a lot of resources into countering DoS attacks," said George Tubin, senior security strategist at Trusteer, a firm that specializes in countering online financial fraud. "But what we have seen in the past is the use of DoS attacks to conceal a fraud attack. They create the perfect cover." While the banks’ security resources are focused on trying to counter the DoS attack, he said, criminals could use other vectors to gain access to accounts and perform transactions in the background before they can be detected.
That’s not to say that there’s necessarily any collusion between the DoS attackers and any potential fraudsters, Tubin emphasized, although it was possible. "They could be coordinated, but they are also frequent enough and common enough that criminals could do their own targeted attack once they see a DoS on an institution."
And those targeted attacks are becoming increasingly costly to banks. An FBI fraud alert last September revealed that attackers had compromised several financial institutions by infecting the computers of employees with malware—including keyloggers and remote control software that allowed them to capture employees’ passwords, access customers’ accounts and make wire transfers ranging from $400,000 to $900,000.
A well-funded attack
Still, Holden said that it’s unlikely that criminals are "coat-tailing" on the Izz ad-Din al-Qassam group’s attacks just yet. "It would have to be one of the incidences where the attackers can tell the site is down, [but then they] wouldn’t be able to get in anyhow. So it’s not as likely."
But even if the group behind the attacks isn’t profiting from them, Holden said it’s clear that there are very real investments being made in their activities—maybe not in servers or hard assets, but in the form of countless hours of maintenance of the botnet by finding new servers to exploit, and further development of attacks.
"Regardless of who’s behind this," Holden said, "it has to be funded at some level. Even if it’s hacktivists, it’s got to be funded hacktivism." That, he says, is because of both the amount of time dedicated to the attack, and to its ongoing refinement. "It’s not that these are the most sophisticated things in the world," he explained, "but it has been getting more sophisticated, and it’s growing."
The goal of the investment in the botnet hasn’t been to create the sort of massive DDoS launched on Spamhaus this week. Rather, Holden said, the goal seems to have "mainly been around being able to attack multiple targets. They’re not interested in the biggest DDoS they can make—they’re more interested in creating constant pressure to prove whatever they’re trying to prove. They’re in it for the long haul."
Courtesy Ars Technica
Last week, anti-spam organization Spamhaus became the victim of a large denial of service attack, intended to knock it offline and put an end to its spam-blocking service. By using the services of CloudFlare, a company that provides protection and acceleration of any website, Spamhaus was able to weather the storm and stay online with a minimum of service disruptions.
Since then, the attacks have grown to more than 300 Gb/s of flood traffic: a scale that’s threatening to clog up the Internet’s core infrastructure and make access to the rest of the Internet slow or impossible.
It now seems that the attack is being orchestrated by a Dutch hosting company called CyberBunker. CyberBunker specializes in "anything goes" hosting, using servers in a former nuclear bunker (hence the name). As long as it’s not "child porn and anything related to terrorism," CyberBunker will host it. This includes sending spam.
Spamhaus blacklisted CyberBunker earlier in the month. A CyberBunker spokesman, Sven Olaf Kamphuis, told the New York Times that CyberBunker was fighting back against Spamhaus because the anti-spam organization was "abusing [its] influence."
When the attack started, on March 18, it measured around 10 Gb/s. On March 19, it hit 90 Gb/s, and on March 22 it reached 120 Gb/s. This still wasn’t enough to knock CloudFlare or Spamhaus offline. So the attackers escalated.
Today, CloudFlare wrote that one of the Internet’s big bandwidth providers is seeing 300 gigabits per second of traffic related to this attack, making it one of the largest ever reported.
This is bad news for the Internet. 300 Gb/s is the kind of scale that threatens the core routers that join the Internet’s disparate networks.
As Ars wrote last week, CloudFlare uses a technique called anycast to distribute traffic to nearby servers. This greatly diffuses the potency of DDoS attacks, by preventing the attackers from focusing their traffic on a single system on the Internet. Instead, the attack traffic all gets directed to a nearby machine—one of CloudFlare’s geographically distributed mirrors. A sufficient flood of traffic could still knock one of those local mirrors offline, but the impact of that should be relatively restricted, with users throughout the rest of the world unaffected.
Once an attack has been detected, the companies that CloudFlare buys bandwidth from—known as "Tier 2" providers—can then block the traffic to prevent it from entering their networks. That doesn’t stop the problem, however; it just moves it upstream.
Tier 2 providers buy their bandwidth from the small number of Tier 1 providers. Tier 1 providers work a bit differently than Tier 2. They don’t buy bandwidth from anyone. Instead, they just connect to other Tier 1 providers for free. These Tier 1 providers are the high-speed backbone that joins all the Tier 2 providers together, and hence makes the Internet a single global network, rather than a bunch of separate networks.
If a Tier 1 provider fails, that risks breaking the entire Internet.
Though the Tier 2 providers are blocking the flood traffic, the Tier 1 providers are still carrying it. As the DDoS attack has grown, so too has this load. The 300 Gb/s figure came from one of these Tier 1 providers. CloudFlare says that several of the Tier 1 networks have started to become congested, particularly in Europe. This congestion can make the entire Internet slower for everyone.
This has been particularly significant in London. Dotted around the globe are a number of "Internet Exchanges" (IXs). These are places where multiple networks from different service providers connect to each other. The London Internet Exchange (LINX), through which an average of about a terabit of traffic passes each second, suffered a substantial outage on March 23. At peak time, its traffic dropped from about 1.5 Tb/s to around half that.
The LINX team has subsequently changed some aspects of their network configuration to make their systems more robust against this kind of large scale attack, and normal service was resumed a little over an hour after the first attack.
The fundamental problem, however, remains. The traffic is being generated primarily from DNS amplification attacks. Small requests are sent to DNS servers, generating responses from those servers that are about 50-100 times larger. The sending addresses of these requests are spoofed, so the DNS servers think they originated not from the attacker’s machine but from the victim’s machine; accordingly, the large responses are sent to the victim, overwhelming it with traffic.
To perform these attacks, the attackers need servers that are open to anyone (and arguably misconfigured). The Open DNS Resolver Project reports that there are about 25 million of these open DNS servers, and hence 25 million servers that can be used to generate enormous quantities of traffic. Making this worse is the fact that, unlike DDoS attacks using home PCs, these DNS servers typically have fast Internet connections.
The number of open DNS resolvers is dropping—CloudFlare reported that it was down by about 30 percent in February—but they’re still abundant, and as the current attacks on Spamhaus make clear, still enough to be tremendously problematic.
To guard against these attacks in future, the open DNS servers need to be reconfigured in some way (to either restrict the IP addresses that can use them, or limit the number of queries they’ll respond to, or both), and networks need to be reconfigured so that they won’t send traffic with spoofed sender addresses.
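A hypothetical sketch of how those two resolver-side mitigations might look inside a request handler—the allowed range, the rate cap, and the function name are all illustrative, not taken from any real resolver implementation:

```python
import ipaddress
import time
from collections import defaultdict

# (1) Answer recursive queries only for clients inside the operator's
#     own networks (example internal range).
ALLOWED_CLIENTS = ipaddress.ip_network("10.0.0.0/8")
# (2) Rate-limit queries per client address, so a spoofed victim address
#     cannot be used to draw an unbounded stream of large responses.
MAX_QUERIES_PER_SECOND = 5

_query_log = defaultdict(list)  # client IP -> timestamps of recent queries

def should_answer(client_ip: str, now=None) -> bool:
    if ipaddress.ip_address(client_ip) not in ALLOWED_CLIENTS:
        return False  # refuse recursion for outside clients
    now = time.monotonic() if now is None else now
    recent = [t for t in _query_log[client_ip] if now - t < 1.0]
    if len(recent) >= MAX_QUERIES_PER_SECOND:
        return False  # over the per-client rate limit
    recent.append(now)
    _query_log[client_ip] = recent
    return True
```

Production resolvers such as BIND and Unbound expose equivalent controls through configuration (client ACLs and response rate limiting) rather than code, but the logic is the same: refuse to be an amplifier for addresses you cannot vouch for.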
Both of these fixes are well-known, and the problems have long been acknowledged. However, they require coordinated action from many parties: every DNS server operator and every ISP needs to do the reconfiguration work.
As for CyberBunker, the company boasts that although "Dutch authorities and the police have made several attempts to enter the bunker by force, none of these attempts were successful." Even a Dutch SWAT team failed to get in. CyberBunker argues that it is currently engaged in a blackmail war with Spamhaus. As Internet wars go, this one is using the nuclear option, and everyone is at risk of being caught in the blast.
Courtesy Ars Technica
According to Richards’ tweets and a personal blog post titled "Forking and Dongle Jokes Don’t Belong at Tech Conferences," the developers in question were sitting right behind her at PyCon and made "jokes about forking repo’s in a sexual way and ‘big’ dongles." (Richards herself apparently made a joke about male genitalia on Twitter, using her work account, several days before PyCon.)
Richards did not tell the developers that she was offended, saying in the comments on her blog post, "I didn’t want to be heckled or have my experience denied." She initially decided to remain silent about the jokes, she wrote, until she saw up on the main stage a photo "of a little girl who had been in the Young Coders workshop." At that moment, "I realized I had to do something or she would never have the chance to learn and love programming because the ass clowns behind me would make it impossible for her to do so," she wrote.
After tweeting the pictures of the developers, she then wrote on Twitter, "Can someone talk to these guys about their conduct? I’m in lightning talks, top right near stage, 10 rows back #pycon." She also linked to PyCon’s code of conduct, which says in part that "offensive jokes are not appropriate for PyCon." On her blog, Richards concluded, "Yesterday the future of programming was on the line and I made myself heard."
PyCon staff quickly spoke to the developers, who apologized, but the staff took no further action. PyCon described its response in a blog post yesterday:
On Sunday March 17th an incident occurred involving some inappropriate comments made during a crowded plenary session. Per the stated guidelines for attendees and staff, the issue was reported to the PyCon 2013 staff and resolved privately.
Both parties were met with, in private. The comments that were made were in poor taste, and individuals involved agreed, apologized, and no further actions were taken by the staff of PyCon 2013. No individuals were removed from the conference, no sanctions were levied.
PyCon values the privacy of all attendees above everything. Except in cases wherein law enforcement must or could be involved, all reports and actions taken are kept confidential by PyCon staff.
After the incident, PyCon updated its "Attendee Procedure for Handling Harassment" to encourage conference attendees to report such behavior privately. "Public shaming can be counter-productive to building a strong community. PyCon does not condone nor participate in such actions out of respect," the updated PyCon procedures say.
One of the developers was also fired by his employer, PlayHaven, which said the inappropriate comments were contrary to the company’s dedication to gender equality. PlayHaven makes a financial platform for monetizing mobile games.
“Forking” and firings
A person going under the Hacker News name "mr-hank" claimed that he was the developer who was fired. He apologized for the jokes but said Richards misinterpreted the "forking" statement. The developers were discussing the process of forking code bases, not making sexual jokes, he said. "While I did make a big dongle joke about a fictional piece [of] hardware that identified as male, no sexual jokes were made about forking," he wrote, adding:
She gave me no warning, she smiled while she snapped the pic and sealed my fate. Let this serve as a message to everyone, our actions and words, big or small, can have a serious impact. I will be at pycon 2014, I will joke and socialize with everyone but I will also be mindful of my audience, accidental or otherwise. Again, I apologize.
The other developer making jokes, Alex Reid, was not fired.
PlayHaven CEO Andy Yang acknowledged firing the developer, writing, "PlayHaven had an employee who was identified as making inappropriate comments at PyCon, and as a company that is dedicated to gender equality and values honorable behavior, we conducted a thorough investigation. The result of this investigation led to the unfortunate outcome of having to let this employee go. We value and protect the privacy of our employees, both past and present, and we will not comment on all the factors that contributed to our parting ways… We believe in the importance of discussing sensitive topics such as gender and conduct and we hope to move forward with a civil dialogue based on the facts."
Richards’ own blog was subsequently hit by a DDoS attack. VentureBeat also reports that Richards has been the target of rape and death threats on Twitter, although such tweets have apparently been deleted by Twitter.
The SendGrid website has also been unresponsive much of the day. The company has not confirmed the cause, although it’s assumed to be a DDoS attack. (Hacktivist group Anonymous is supposedly behind it.) "Our SMTP and Web servers are currently down," SendGrid wrote in a status update. A few hours later, the company provided some good news: "We’re now accepting mail via SMTP & Web API. The Website is up, and we’re working to finalize this resolution." Business customers have complained about the outage impacting their operations.
Today, SendGrid fired Richards. In a Facebook post, SendGrid wrote, "Effective immediately, SendGrid has terminated the employment of Adria Richards. While we generally are sensitive and confidential with respect to employee matters, the situation has taken on a public nature. We have taken action that we believe is in the overall best interests of SendGrid, its employees, and our customers."
A further post from SendGrid CEO Jim Franklin a few hours later expanded on the company’s reasons for firing Richards. "SendGrid supports the right to report inappropriate behavior, whenever and wherever it occurs," Franklin wrote. "What we do not support was how she reported the conduct. Her decision to tweet the comments and photographs of the people who made the comments crossed the line. Publicly shaming the offenders—and bystanders—was not the appropriate way to handle the situation.
"A SendGrid developer evangelist’s responsibility is to build and strengthen our Developer Community across the globe. In light of the events over the last 48+ hours, it has become obvious that her actions have strongly divided the same community she was supposed to unite. As a result, she can no longer be effective in her role at SendGrid."