Category Archives: spam

Bozer and his Bulgarians: DDOS?

This is an open thread. But I want to tell people what’s been up on the back end of the blog.

Have some of you seen internal error pages? Been presented the “scary page” after commenting? That was my fault. Thanks for emailing me.

A variety of temporary glitches were introduced as I edited both the ‘.htaccess’ and the ‘zbblock’ files that I use to deal with bots. I did something of a ‘re-organize’.

Why? I’ve been dealing with what appears to have been a fairly low-tech, slow, d-semi-dos; that is: it seemed to be ‘distributed’ but only seemed to semi-deny service. So: “distributed-semi-denial-of-service”. The main features were/are:

  • A specific old post was requested roughly every two minutes. (The request rate is now reduced.)
  • The overwhelming majority of requests come from thousands of different IPs associated with server farms of some sort. Of the remainder, most come from countries with reputations for hosting quite a bit of hacking/spamming. A small amount does come from connection-providing ISPs in ‘mostly clean’ countries.
  • Some of the same IPs would return and ask for the post after a period of an hour or so. These ‘slow’ returns by any individual IP make it difficult for an upstream CDN like Cloudflare to detect the undesired traffic.
  • There were/are features that make this seem like it might be ‘personal’ (in some sense) rather than just “garden variety script-kiddy bots who just look for low hanging fruit”. That said, it’s really hard to say.
  • During one period, at least one client made several requests whose X-Forwarded-For headers included a second IP: 198.50.228.116. This IP is at AS16276 OVH and whois tells me it’s a “Private Customer” in “Sofia, BG”. So, I’ve nicknamed the person (or group of persons) hitting the blog “Bozer and his Bulgarians”. (This is not to say that I actually suspect this originates in Bulgaria, but who knows.)
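For the curious, pulling the extra IPs out of an X-Forwarded-For header amounts to something like this rough Python sketch (the function name is mine, not ZBblock's; real headers often contain junk tokens like “unknown”):

```python
import ipaddress

def extra_ips(x_forwarded_for: str) -> list:
    """Return the valid IPs found in a comma-separated X-Forwarded-For value."""
    ips = []
    for part in x_forwarded_for.split(","):
        part = part.strip()
        try:
            ipaddress.ip_address(part)  # raises ValueError on junk tokens
            ips.append(part)
        except ValueError:
            pass
    return ips

print(extra_ips("203.0.113.7, 198.50.228.116"))  # ['203.0.113.7', '198.50.228.116']
```

Any IPs recovered this way can then be checked against the ban list just like the connecting IP itself.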

Although this behavior was not going to take down the server, it was pesky.

But I also decided the pesky behavior could be turned into an episode of using lemons to make lemonade: it gave me a chance to identify all the new ‘pesky servers’ and update my list of ‘currently active bad servers’. I hadn’t done that in… oh…. a year or two. So, it needed doing.

Because of the features of these connections, I also changed my strategy for dealing with these IPs. Previously I dealt with all the proxy IPs in ZBblock and ultimately banned some at Cloudflare. I now detect a sizable number of pesky IP ranges and do
RewriteRule ^(.*) http://%{REMOTE_ADDR}/ [L]
in htaccess.

This rewrite rule sends the request back to its originating IP. The current rules are likely over-inclusive, and I’ll be backing some off over time. If you or someone you know runs across it, you (or they) will be told you are having trouble connecting to the site, and your IP will be displayed to you.

A few countries are also currently blocked at Cloudflare. Some of these countries will be unblocked by the end of the week, others will not. (China will never be unblocked. Sorry.)

Of course, some things still only get blocked in ZBblock. (Last night, people who commented were blocked. Sorry!)

If you see any ‘block’ page, email me. Or tweet me. (Of course, I’m aware if that’s happening to you, you won’t read this post. So, it’s a bit of a catch-22. But if someone else tells you they are encountering the problem, have them email me. )

If someone does ask me to let them through, I will ask them for their IPs to help me fix the problem for them– either by opening a wide range or opening up a specific one for an individual static IP. If they refuse to provide an IP– as some people do– I will be unable to fix the problem. If– for some mysterious reason– they insist on connecting through Tor, a VPN or some server farm, and lecture me on how I should permit them– and everyone else on Tor, VPNs, or server farms– to do so, I will tell them to pound sand. I know perfectly well that they have a non-Tor/VPN/server-farm IP they use to connect to the Tor/VPN/server farm. If they are too stubborn to use that to read my blog, I’m too stubborn to let them use the Tor/VPN/server farm.

Likewise: I am probably blocking most RSS feeds. I can– over time– open up some of these as I identify which need to be unblocked. However, from my point of view, preventing the D-S-DOS is a higher priority than unblocking feeds. If the RSS feed you prefer is currently blocked and you want it unblocked quickly, you will need to email me, ask me, and– possibly– provide information to help me identify the IP ranges/user agents etc. that particular feed uses. Not to sound too snotty: but if you aren’t willing to do some digging to supply me with information regarding the feed you prefer, it’s not going to get unblocked quickly. There are tons of things hitting the feed, and not all of them are feed readers.

Obviously, as I am currently blogging lightly, I don’t expect to be overwhelmed with requests to clear connections, feeds, etc. But if your university, country, feed etc. has been blocked, let me know. If it’s easy to fix, I’ll fix it. If it requires info to fix, I’ll assign you the task of getting the info, and then fix it.

Anyway, for now, the d-semi-dos seems to have slowed down. The effects of its main strategy seem to be neutralized. If it is a person and it is personal, my posting may cause it to change strategies. If it’s just a script-kiddie, it’s taken care of.

Either way, open thread.

NYET to NYET.gif

Ironically, while we’ve all been discussing What constitutes ‘hacking’?, hackers have been trying to deface my site! Or at least that’s my diagnosis. Assuming my diagnosis is correct, the likely culprits are script-kiddies going by the name ‘d3b~X’ who are involved in a site-defacement game of some sort. Anyway, below is a typical ‘attack’ (there were many of these a day over several days; it’s stopped now). If you see these, do not– I repeat, do not– add ‘rankexploits.com’ to any of the uri snippets and load them in a browser.

I’ll post this both to show some readers the features of something that actually is hacking and also to alert others who might be seeing suspicious activity to this particular hacking attack, which exists out ‘in the wild’. I’ll call it the “nyet attack”.

The nyet attack typically starts with someone trying to “PUT” an image file on the server:
108.61.14.235 - - [30/May/2014:02:00:26 -0700] "PUT /nyet.gif HTTP/1.1" 405 473 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; de-LI; rv:1.9.0.16) Gecko/2009120208 Firefox/3.0.16 (.NET CLR 3.5.30729)"
HTTP PUT requests are forbidden by my server. (In my opinion, they should be blocked by most servers. Almost no one running a web site wants to let the public “PUT” things on the server.)

Having tried to “PUT” the image, the script kiddies next try to “GET” the image to see if it’s there. “GET” is always allowed by servers. However, the “PUT” attempt failed, so my server responds with a ‘404’:
108.61.14.235 - - [30/May/2014:02:00:27 -0700] "GET /nyet.gif HTTP/1.1" 404 2357 "-" "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0)"
Presumably, if the script kiddies had seen a “200” they would proclaim victory and report their hackage to Zone-H.org. As they saw a 404, they continued, submitting a half dozen or so requests in roughly 2 seconds:

108.61.14.235 - - [30/May/2014:02:00:27 -0700] "GET /components/com_jnews/includes/openflashchart/php-ofc-library/ofc_upload_image.php?name=a HTTP/1.1" 503 469 "-" "Mozilla/5.0"
108.61.14.235 - - [30/May/2014:02:00:27 -0700] "GET /administrator/components/com_acymailing/inc/openflash/php-ofc-library/ofc_upload_image.php?name=a HTTP/1.1" 503 469 "-" "Mozilla/5.0"
108.61.14.235 - - [30/May/2014:02:00:27 -0700] "GET /components/com_community/index.html HTTP/1.1" 404 680 "-" "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0)"
108.61.14.235 - - [30/May/2014:02:00:27 -0700] "GET /index.php?option=com_jce&task=plugin&plugin=imgmanager&file=imgmanager&method=form&action=upload HTTP/1.1" 503 680 "-" "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0)"
108.61.14.235 - - [30/May/2014:02:00:28 -0700] "GET /index.php?option=com_media&view=images&tmpl=component&e_name=jform_articletext&asset=com_content&author= HTTP/1.1" 503 680 "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.15.3 zlib/1.2.3 libidn/1.18 libssh2/1.4.2"
108.61.14.235 - - [30/May/2014:02:00:28 -0700] "GET /en/components/com_jnews/includes/openflashchart/php-ofc-library/ofc_upload_image.php?name=a HTTP/1.1" 503 469 "-" "Mozilla/5.0"
108.61.14.235 - - [30/May/2014:02:00:28 -0700] "GET /en/administrator/components/com_acymailing/inc/openflash/php-ofc-library/ofc_upload_image.php?name=a HTTP/1.1" 503 469 "-" "Mozilla/5.0"
108.61.14.235 - - [30/May/2014:02:00:28 -0700] "GET /en/components/com_community/index.html HTTP/1.1" 404 680 "-" "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0)"
108.61.14.235 - - [30/May/2014:02:00:28 -0700] "GET /en/index.php?option=com_jce&task=plugin&plugin=imgmanager&file=imgmanager&method=form&action=upload HTTP/1.1" 503 819 "-" "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0)"

In the above, notice the script seems focused on the /en/ directory. This is not a consistent feature: it might look in the main directory, or in ‘/forums/’, ‘/administrator/’, ‘/Configuration%20Management%20%20Release%20Engineering/’, ‘/archive/’, ‘/view/’ or any number of other directories. I don’t know how it guesses, but I suspect these are common directory names under some popular CMSs.

There are a few variations to look for. The script might substitute ‘%2E’ for ‘.’ or look for .txt files. The key is the “nyet”.

207.7.94.35 - - [30/May/2014:13:31:52 -0700] "PUT /nyet%2Egif HTTP/1.1" 405 473 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; de-LI; rv:1.9.0.16) Gecko/2009120208 Firefox/3.0.16 (.NET CLR 3.5.30729)"
207.7.94.35 - - [30/May/2014:13:31:52 -0700] "GET /nyet.gif HTTP/1.1" 404 2316 "-" "curl/7.19.7 (i386-redhat-linux-gnu) libcurl/7.19.7 NSS/3.13.1.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2"
207.7.94.35 - - [30/May/2014:13:31:52 -0700] "PUT /nyet%2Etxt HTTP/1.1" 405 471 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; de-LI; rv:1.9.0.16) Gecko/2009120208 Firefox/3.0.16 (.NET CLR 3.5.30729)"
207.7.94.35 - - [30/May/2014:13:31:53 -0700] "GET /nyet.txt HTTP/1.1" 404 677 "-" "curl/7.19.7 (i386-redhat-linux-gnu) libcurl/7.19.7 NSS/3.13.1.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2"

Notice my server returned a combination of “404” (missing resource) and “503” HTTP responses.
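If you want to scan your own access logs for this attack, something like this rough Python sketch works (the regex is a loose match for Apache’s combined log format; the function name is mine):

```python
import re

# Loosely match: ip ident user [timestamp] "METHOD path protocol" status ...
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+) [^"]*" (\d{3})')

def nyet_suspects(log_lines):
    """Return IPs that issued a PUT for a path containing 'nyet'."""
    suspects = set()
    for line in log_lines:
        m = LOG_RE.match(line)
        if m and m.group(2) == "PUT" and "nyet" in m.group(3).lower():
            suspects.add(m.group(1))
    return suspects

sample = [
    '108.61.14.235 - - [30/May/2014:02:00:26 -0700] "PUT /nyet.gif HTTP/1.1" 405 473 "-" "Mozilla/5.0"',
    '108.61.14.235 - - [30/May/2014:02:00:27 -0700] "GET /nyet.gif HTTP/1.1" 404 2357 "-" "Mozilla/5.0"',
]
print(nyet_suspects(sample))  # {'108.61.14.235'}
```

The lowercase test also catches the ‘%2E’ variants shown below, since “nyet” survives the encoding of the dot.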

The 503 responses spring from Dreamhost having coded a script to automatically block plugins that are known to be vulnerable and which cause customers so much grief that Dreamhost itself is blocking requests to them. In particular, “ofc_upload_image.php” is blocked by Dreamhost. The specific reason is that this image uploader most likely suffers from the well-known “timthumb” issue: that is, users (and third-party hackers) could upload malicious scripts using that file. Once that was done, the hacker could take over your site. Dreamhost knows this and also knows that no one wants hackers to upload scripts that permit the hackers to take over sites. So, Dreamhost takes proactive steps to block these requests.

Since we were previously discussing what constitutes a hack: well, the above is a hack (in the ‘tried to crack into the server’ sense, not the “did something clever” sense). Key features that establish this as a ‘hack’:

  1. They tried to upload material to my site. The “PUT” alone is evidence of this. Queries like “?option=com_jce&task=plugin&plugin=imgmanager&file=imgmanager&method=form&action=upload” also look suspiciously like attempts to ‘upload’ material. No one believes that running a web server implicitly authorizes visitors to upload material.
  2. These are not “obscure” uri’s I created but “hoped” no one would find. These are uri’s that point to nothing. They are guesses that would resolve to known vulnerable resources which would permit the person requesting them to hack into the site if those resources were available on my site. (They aren’t available, but that’s another matter.) No one believes that running a web server implicitly authorizes anyone to guess uris the guesser thinks almost certainly do not exist, particularly not if those uri’s match known vulnerability patterns.
  3. This group is performing similar hacks elsewhere. Their successful hack attempts are archived to show the world they succeeded in uploading the image. They seem to sometimes deface homepages too. So we know these are attempts to upload because that’s what the group has been doing– and bragging about in public.

That pretty much shows how the above is a hack of the “tried to break into the server” sort. It’s not simple attempts to view material that is hosted on public-facing servers– it represents attempts to upload stuff. (I suspect it’s illegal. It’s unlikely I could catch these guys, who may not be in the US in any case. It’s also unlikely I could interest law enforcement in this activity.)

How to avoid getting hacked.
For those who might arrive here later through google, one might wonder: how did I manage to escape being hacked? Oddly, it’s partly luck, partly good planning, and some degree of vigilance.

It’s worth pointing out that the people involved in this hacker/defacement game are skilled at defacing pages. If I had a vulnerable server or hosted vulnerable plugins, my ‘security’ would do little to stop this particular group. My security scripts do notice these attempts and do ban, at Cloudflare, IPs that attempt this sort of thing, thereby limiting a hacker to about 5 seconds of ‘wild guesses’ before being banned. This does make it harder for many script-kiddies who program scripts to keep guessing and guessing potentially vulnerable uris for (I kid you not) hours on end. This does increase my security relative to the numerous unskilled script kiddies out there. (Unskilled script-kiddies greatly outnumber skilled ones. They can still succeed in hacking into a server, and steps to prevent that are worth undertaking even if the protections can be out-maneuvered by the hacker equivalent of ‘cat-burglars’.)

This particular group of hacker/defacers trying to upload ‘nyet.gif’ uses lots of proxies and jumps in fast. Their script appears to spend about 2 seconds using a particular IP, then return with a new and different one making new guesses. Each IP does get banned at Cloudflare for a few days and so becomes unavailable for use in an attack. This potentially makes it a bit harder for them to succeed in hacking (which I consider a good thing), but I wouldn’t be at all surprised if the group uses thousands of IPs. That means their IPs have plenty of time to make plenty of guesses.

I don’t delude myself that they could not succeed in finding a vulnerability if one existed. If I host a vulnerable plugin and the script-kiddies correctly guess and request it using a fresh IP during the few seconds before it gets banned at Cloudflare, I could get hacked. This is true for everyone.

So, why did this group of fairly skilled hackers fail to upload “nyet.gif?”

I avoided being hacked mostly by never installing the specific vulnerable plugins the hackers are hoping to exploit using their current script. They could guess any directory they liked: that plugin is not to be found on my server. In addition, Dreamhost has taken some action to protect me from myself. Had I installed that plugin, I likely would have discovered it no longer worked: Dreamhost’s decision to block direct requests might have rendered it dysfunctional. (If I were ignorant, I might have groused about the lack of functionality.)

I think Dreamhost itself forbids the ‘PUT’. But even if it did not, it happens I also forbid “PUT” in .htaccess. That makes the ‘PUT’ attempts fail. With respect to site security, redundancy can be a good thing, especially when you aren’t sure whether your hosting company keeps up to speed on the list of vulnerable plugins or hacking behaviors being exhibited out ‘in the wild’.

Even more fortunately, developers writing plugins for WordPress are becoming more security minded. They tend to preface plugins with commands like
if ( ! defined( 'ABSPATH' ) ) die( "Aren't you supposed to come here via WP-Admin?" );
This prevents a plugin containing that bit of code from actually doing anything if called directly rather than through WordPress itself. Limiting use to ways that are intended reduces the potential for harm even if the plugin has some other vulnerability. Unfortunately, the practice of adding code to ensure the plugin is only functional if called through WordPress itself is not yet universal. But the fact the practice is increasing is one reason why people should update plugins regularly. I do update plugins as soon as newer versions are available.

With that, I’ll end my saga of describing something that really is a hack attempt and why it didn’t work (this time. I’ve got my fingers crossed that future attempts also won’t work.)

For those wanting discussions of temperatures: With any luck, Roy will announce temperatures soon and the next post will be about climate or weather. 🙂

Stupid Script Kiddies (or really dumb spambot.)

For those who sometimes enjoy seeing the truly stupid things script kiddies do, here’s an interesting request from my server log:

190.75.219.125 - - [20/Jun/2013:13:30:55 -0700] "GET / HTTP/1.1" 200 366 "http://rankexploits.com/musings/2009/the-trouble-with-revkins-critics/+++++++++++Result:+forum+not+found+/+could+not+find+IP+Result:+forum+not+found+/+could+not+find+IP+Result:+forum+not+found+/+could+not+find+IP+Result:+this+IP+is+banned+-+changing+proxy+1+one;+no+post+sending+forms+are+found;+Result:+forum+not+found+/+could+not+find+IP+Result:+forum+not+found+/+could+not+find+IP" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.204 Safari/534.16"

Zbblock blocks this by default. But I had to chuckle at a script so buggy that it appends what must be its own error messages to the uri. Let’s see: “forum not found”? “This IP is banned”? “Changing proxy”? “No post sending forms are found”? This has got to be the buggiest spambot ever!

Got blocked? Workaround.

Over the past few months, I’ve received a few emails from honest-to-goodness people blocked because they are connecting from China. I’ve also gotten emails from people whose IPs were– through no fault of their own– listed in StopForumSpam’s spammers list. I’ve generally given people advice, and when necessary tweaked the bot-blocking software to create single-case exceptions. However, there are enough single-IP exceptions that I decided I need a broader “workaround”. I have currently devised the following workaround, which works to get around a few specific ‘rules’ in the blocking software. The workaround is this:

Spoof your user agent to read “Lucia Special user agent for The Blackboard”. It must read exactly that but you need only spoof when visiting The Blackboard. Use of this user agent will:

  1. Permit those whom Cloudflare identifies as being from China, but whom Cloudflare does not yet block, to pass through ZBBlock.
  2. Permit IPs that make it through Cloudflare but which are listed at Stop Forum Spam to comment without getting banned.

The description of what it does may sound convoluted. The path is you->Cloudflare->ZBblock->the joy of reading The Blackboard.

The special user agent will only help at ZBblock, which has a rule to bounce you if you come from “CN”. These people are presented a message telling them to contact me if they wish to be whitelisted. (I have about a dozen Chinese IP ranges whitelisted.) The difficulty for people in China is that many Chinese IPs have been permanently banned owing to the numerous hack, spam or scrape attempts from Chinese ranges. So, if you happen to know someone who mentions they were banned, then possibly they can learn about the issue and contact me. A similar difficulty happens with IPs listed on Stop Forum Spam. For example: a fairly frequent visitor signed on to a new ISP service, obtained a static IP and found himself blocked. The difficulty is that quite likely a spammer previously used that IP. While this can probably be sorted out, in the meantime, he would like to comment. So, the user-agent spoofing trick will work.

How can you spoof?
If you use Firefox it is very easy to spoof agents. Visit useragent switcher, install and start switching. For reasons of privacy and functionality of other web sites, you will probably want to adjust the setting to switch only when you are at “http://rankexploits.com” and turn off user agent switching otherwise. Turning the spoofing switch on and off is a bit of a pain in the neck, but it’s somewhat more convenient than getting banned, writing me, having me code a workaround for your IP and then visiting again. So if your IP is listed on StopForumSpam or you are in China, this should be a convenience.

You can test the effectiveness of your switching by testing the switch for “http://whatsmyuseragent.com/” and afterwards seeing if that worked.
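If you script your visits rather than use a browser, the same spoof is just a request header. A minimal Python sketch (the special user-agent string is quoted from above; no network access is needed to confirm the header is set):

```python
import urllib.request

# The special user-agent string described in this post.
SPECIAL_UA = "Lucia Special user agent for The Blackboard"

req = urllib.request.Request(
    "http://rankexploits.com/",
    headers={"User-Agent": SPECIAL_UA},
)
# urllib normalizes header names to capitalized form:
print(req.get_header("User-agent"))  # Lucia Special user agent for The Blackboard
```

Opening the request with urllib.request.urlopen(req) would then send the spoofed agent with the visit.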

Owing to the possibility that bots operated by actual humans might wish to spam my comments, the special useragent may change from time to time. For now this should work like a (somewhat cumbersome) charm.

How Constant are Hacking Attempts?

From time to time, valued visitors are banned. This happens most frequently when I am adding new “rules”. Unfortunately, sometimes the rules– especially new rules– hammer people. Those people are often at universities and research institutions. They are naturally puzzled that a rule might cover their institution. It may seem odd, but IPs at universities and research agencies often do look suspicious. I suspect the reason for this is related to the fairly transient population of students and graduate students, some of whom might decide to get involved in a little “private enterprise”. Also, from the point of view of security, universities often have a rather open and free-wheeling nature, valuing novelty and research. But it is fairly apparent that “research” bots sometimes behave rather badly during the development stage. Bad bot behavior can be due to bugs, incomplete implementation of the design or– quite likely– an oblivious designer who doesn’t think through how their nifty new agent might affect operation of a site it hits. (Note that the oblivious designer might be an undergraduate assigned the task of writing a scraper for their course on “intro to scraping”; their priority is generally “get the assignment done in time to turn it in”.)

Anyway, over the past two weeks three or so people wrote to tell me they got banned. Needless to say: I was adding new rules and some were “less than perfect”. I thank you all for your patience and especially thank those who wrote telling me the script made a mistake.

But enough of the chit-chat. Today, I want to show you just how bad it really is.
After the “more” tag, I have pasted a list of the IPs banned for connections that appear highly likely to be actual hacking. This list is limited to 15 days’ worth of attempts that were diagnosed as “hack” or “penetration testing” attempts. This is actually a small fraction of the bans— some bans are just “snoop”, “scrape”, or “spam” attempts. The following list is limited to connections whose “reason” for banning includes the word “hack”. (Note the “reason” is trimmed to a finite length. Some of these things violate numerous rules, so the word “hack” might not appear in the reason displayed.)
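The filter itself is trivial; in Python it would look something like this (the records shown are invented for illustration; the real ZBblock log holds more fields):

```python
# Hypothetical (ip, reason) records; a real ban log holds more fields.
bans = [
    ("198.51.100.9", "You asked for crossdomain.xml ? Hack."),
    ("203.0.113.5",  "spambot signature; spam keywords"),
    ("192.0.2.77",   "timthumb probe: hack pattern in query"),
]

# Keep only entries whose trimmed 'reason' mentions the word 'hack'.
hack_bans = [(ip, why) for ip, why in bans if "hack" in why.lower()]
print(hack_bans)
```

As noted above, the trimming means some genuine hack attempts would be missed by this keyword filter.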

Continue reading How Constant are Hacking Attempts?

Blocking TOR (because it’s a nightmare.)

I wanted to post a quick note on blog changes. I’ve decided I can no longer tolerate the periodic hack attempts coming from free or very cheap anonymizing services. These include TOR exit nodes and free anonymous proxies like “hide my ass”.

  1. I am now systematically identifying and blocking TOR exit nodes. The method is not fully automated yet, but involves getting a list of TOR nodes currently in operation from http://torstatus.blutmagie.de/ip_list_exit.php/Tor_ip_list_EXIT.csv, identifying which I have not yet blocked, and blocking those. In the next few days I will script-i-fy this so I can run a cron job. Owing to the nature of TOR, all TOR exit nodes will be banned at Cloudflare.
  2. After having banned numerous known SPAM/HACK/NASTY servers and ISPs, I have extended ZBblock to detect extra IPs in the ‘HTTP_X_FORWARDED_FOR’ header variable. The additional IPs are run through ZBblock’s list of bad IPs. If an otherwise innocent-looking IP is being used to mask a known-nasty IP, the previously-thought-innocent IP will be banned. Also, any Brazilian IP that comes hiding IPs in the ‘HTTP_X_FORWARDED_FOR’ header will be banned. (If you are in Brazil and can think of why this might be a problem, let me know. But I’m tired of seeing Brazilians hiding spammy Chinese IPs. A few other countries will be treated similarly.)
  3. I have been blocking many known anonymous proxy IPs and will begin doing so more systematically. In the past, my script Cloudflare-banned IPs that were caught trying to hack. I normally unban those after 7 days, but my unban script keeps the ban in place if the host name includes a word like “proxy”, “private” or “anonymous”. I will be escalating by visiting “http://proxy.org/proxies_sorted2.shtml”, which lists proxies by IP, and banning the IPs of proxies found listed on that page. The list seems to change at least daily, so I will be writing a script to read those IPs and setting a cron job to get those all banned.

    I know many people use anonymous proxies at work or on travel. If you must use an anonymous proxy service, please let me know the name of the service so I can make an exception for that service. If I know in advance, we can develop a workaround that permits me to screen out as many of the stupid resource-sucking crawlers as possible while letting through people who really do need to use some sort of proxy. (If you are a stranger using ‘social engineering’ to try to carve out a hole for your seo company, try not to make the story too ridiculous. Plz.)
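For anyone wanting to do something similar, the fetch-and-diff step for the TOR exit list might be scripted roughly like this in Python (parsing assumes one IP per line, which is how that CSV reads; the function names are mine):

```python
import ipaddress

def parse_exit_list(text: str) -> set:
    """Keep only lines that parse as IP addresses; skip headers and blanks."""
    exits = set()
    for line in text.splitlines():
        line = line.strip()
        try:
            ipaddress.ip_address(line)
            exits.add(line)
        except ValueError:
            continue
    return exits

def newly_seen(current: set, already_blocked: set) -> set:
    """Exit nodes not yet on the ban list."""
    return current - already_blocked

sample = "ExitAddress\n198.51.100.4\n203.0.113.9\n\n"
print(newly_seen(parse_exit_list(sample), {"198.51.100.4"}))  # {'203.0.113.9'}
```

A cron job would fetch the list (e.g. with urllib), run this diff, and submit the new IPs to the Cloudflare ban list.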

For those wondering if banning all these things really helps: Yes. It does.

I started banning TOR exit nodes the day before yesterday. I have seen a substantial drop in hacking attempts– particularly of the “timthumb”, “uploadify” or similar variety. I haven’t computed numbers, but my swag is the error logs are 50% shorter by number of entries and 90% shorter by unique IPs. (Some individual IPs will come in and hammer a while before I can ban them at Cloudflare.)

I am also seeing gaps of as much as 1 hour between errors in the error logs. These tend to log attempts to connect to missing uris. They fill up with bots trying to hit non-existent pages containing words they guess. The words are usually things like “register”, “login”, “sign_in”. Today’s error logs have the wonderful-to-me feature that more than 1/2 the failed IPs were trying to load broken links rather than hack-signature uris.

Hour long gaps are unprecedented in the past two years. So, banning TOR and free proxies really is helping.
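If you want to measure those quiet gaps in your own logs, here is a rough Python sketch (the timestamp format is the usual Apache one; the function name is mine):

```python
from datetime import datetime

def max_gap_minutes(timestamps):
    """Largest gap, in minutes, between consecutive log timestamps."""
    times = sorted(datetime.strptime(t, "%d/%b/%Y:%H:%M:%S") for t in timestamps)
    gaps = [(b - a).total_seconds() / 60 for a, b in zip(times, times[1:])]
    return max(gaps) if gaps else 0.0

sample = ["30/May/2014:02:00:26", "30/May/2014:02:00:28", "30/May/2014:03:05:28"]
print(max_gap_minutes(sample))  # 65.0
```

Run over a day's worth of error-log timestamps, a value above 60 corresponds to the hour-long lulls described above.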

I’ll be automating a lot of this tomorrow. Feel free to pipe in and give advice especially if you can help me find the IPs for free anonymous proxies more efficiently.

I’m now off to buy ingredients to make pie. The Women of the Moose requested pie for the meeting tonight. I think I’ll make apple.

Wars on Hackers: /crossdomain.xml


Question for those more IT proficient than I. If I am wrong about a certain block, I want to eliminate a particular rule that has blocked a human. But if I am right, I want them to figure out what is wrong with their browser.

First let me describe two of the rules I use to block access to my site.

  1. If a browser tries to load “…./crossdomain.xml” I block that connection.
  2. If a browser presents a cookie with the name “mp_72366557fd3f1bd4fc7d4bfca5cd0a12_mixpanel” to my site, I block that connection. The reason is that there is no javascript, php or anything else that sets a cookie of that name at my site. (Or at least I think there isn’t!)

Note: every single time I have seen a browser present that cookie name, it has gone on to request “…./crossdomain.xml”. Also, “…./crossdomain.xml” does not exist anywhere on my site.
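Taken together, the two rules amount to a check like this Python sketch (the names are mine; ZBblock's actual implementation differs):

```python
# Cookie name quoted from the rule above; it is never set by this site.
BAD_COOKIE = "mp_72366557fd3f1bd4fc7d4bfca5cd0a12_mixpanel"

def should_block(path: str, cookie_names) -> bool:
    """True if the request trips either rule described above."""
    if path.endswith("/crossdomain.xml"):
        return True
    return BAD_COOKIE in cookie_names

print(should_block("/crossdomain.xml", []))     # True: rule 1
print(should_block("/musings/", [BAD_COOKIE]))  # True: rule 2
print(should_block("/musings/", ["zbb_1"]))     # False
```

In practice, as noted, every request tripping rule 2 has also gone on to trip rule 1.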

However, if I am wrong about this rule, I would like to lift it. Otherwise, I want to know what is trying to connect to that resource– which does not exist– and why. Is this something like a favicon.ico that I ought to create? Or what? I note that only a very small fraction of browsers try to hit it– but obviously, if requesting it is becoming some sort of routine, I don’t want to be blocking people.

#: 66276 @: Tue, 19 Jun 2012 07:19:43 -0700 Running: 0.4.10a1
Host: ----blanked out--
IP: ----blanked out--
Score: 2
Violation count: 2
Why blocked: ; You asked for crossdomain.xml ? Hack. bad cookie:(mp_72366557fd3f1bd4fc7d4bfca5cd0a12_mixpanel,); cookies:(good:7 other: 0 length:0) ( 0 ); c= AU
Query:
Referer:
User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; AskTbIMH6/5.13.1.18261)
Reconstructed URL: http:// rankexploits.com /crossdomain.xml

(Note: I do have many draconian blocks. If people ask nicely I’ll look into it. But I did get whammied after a complete stranger on Bezequint requested I unblock that site…. so I am super cautious if someone whose name I don’t recognize emails me or if they resort to irony in their first email and so on. Sorry….but… well.. In this case, someone asked nicely. I’m sure he’s human. I’d like to find out if my block is wrong. )

Update: some are suggesting I just ignore these attempts to access /crossdomain.xml for no good reason. However, I would like to point them to Wikipedia, which mentions this resource in its article about Cross-Site Request Forgery, an attack it describes as “under-reported”. Here are the relevant bits:

The attack works by including a link or script in a page that accesses a site to which the user is known (or is supposed) to have been authenticated.[1] For example, one user, Bob, might be browsing a chat forum where another user, Fred, has posted a message. Suppose that Fred has crafted an HTML image element that references an action on Bob’s bank’s website (rather than an image file), e.g.,

If Bob’s bank keeps his authentication information in a cookie, and if the cookie hasn’t expired, then the attempt by Bob’s browser to load the image will submit the withdrawal form with his cookie, thus authorizing a transaction without Bob’s approval.

Note: Cookies are used to identify which users are “known” to be authenticated. That’s why I worry about browsers presenting cookies I did not set.

CSRF attacks using image tags are often made from Internet forums, where users are allowed to post images but not JavaScript.

Note that users are allowed to post images here– but not JavaScript. This is common to WordPress blogs and would be a reason why a script kiddie might try this attack at a blog (or even why someone might just write malware that rides along making attempts wherever it goes).

Web sites have various CSRF countermeasures available:

  1. Requiring a secret, user-specific token in all form submissions and side-effect URLs prevents CSRF; the attacker’s site cannot put the right token in its submissions[1]
  2. Requiring the client to provide authentication data in the same HTTP Request used to perform any operation with security implications (money transfer, etc.)
  3. Limiting the lifetime of session cookies
    [Since I don’t set these, I’m watching for faked cookies whose lifetime I obviously cannot control.]
  4. Checking the HTTP Referer header or(and) Checking the HTTP Origin header[16]
  5. Ensuring that there is no clientaccesspolicy.xml file granting unintended access to Silverlight controls[17]
  6. Ensuring that there is no crossdomain.xml file granting unintended access to Flash movies[18]
    [I don’t have this file. But I am suspicious when something tries to request it for no reason. –l]
  7. Verifying that the request’s header contains an X-Requested-With header. Used by Ruby on Rails (before v2.0) and Django (before v1.2.5). This protection has been proven insecure[19] under a combination of browser plugins and redirects which can allow an attacker to provide custom HTTP headers on a request to any website, hence allowing a forged request.
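Countermeasure #1 is the one most frameworks settle on. Here is a rough sketch of the idea. To be clear: the function names, and the use of an HMAC over a session id, are my own illustration of the technique, not anything this blog or WordPress actually runs.

```python
# Sketch of CSRF countermeasure #1: a secret, user-specific token required
# in every form submission. An attacker's page cannot compute the token
# because it never sees SECRET_KEY.
import hashlib
import hmac

SECRET_KEY = b"server-side secret, never sent to any client"

def make_token(session_id: str) -> str:
    # Derive a token tied to one session; embed it as a hidden form field.
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def check_token(session_id: str, submitted: str) -> bool:
    # Reject the POST unless the submitted token matches this session's,
    # using a constant-time comparison.
    return hmac.compare_digest(make_token(session_id), submitted)
```

A forged image-tag request carries Bob’s cookie but not the hidden token, so check_token() fails and the withdrawal never runs.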

There is more. But basically, unless I read of a justifiable reason why a browser hits /crossdomain.xml, I’m continuing this block.

Cookie comment policy (& Bots loaded for bear.)

First: Thanks Paul K for the very interesting post! Second: We had out of state family visitors. (Jim’s cousin and her husband, Ruth and Tim Casteen from Virginia.) Third: I am announcing a new comment policy I began testing this morning. If it causes no problems, it will become permanent. The policy is:

To comment, your browser must accept a cookie from my domain.

You do not need to accept 3rd-party cookies. You can limit cookie acceptance to those that expire when your browser session ends. In fact, for now, you can limit accepting cookies to the one named ‘zbb_1’. (WordPress does set some other cookies, as do Cloudflare and Bad Behavior. You are not required to accept those to comment.)

The motivation for my policy that requires you to eat the force-fed ‘zbb_1’ cookie is as follows:

Examining Bad Behavior logs, I noticed that quite a few bots present cookies. However, the cookies they present are spoofed. That is: they were not set by my domain. For example, the bots present me with cookies with names like ‘blogger_TID’, ‘ASPX’ or ‘vbseo’. I’m guessing, but I suspect the first is a cookie name Blogger looks at to pre-fill comment forms, the second is set by something running ASP.NET (‘active server’) pages, and ‘vbseo’ is set by forums running the vBSEO (‘vBulletin SEO’) plugin. Other cookies have names that suggest the ‘bot is trying to pre-fill a shopping cart.

One thing I know: I (that is, my domain) didn’t set these. Given the conventions governing cookie exchanges, that means the bot shouldn’t be presenting these back to me.

Some of these bots present hundreds of pointless cookies that no real honest to goodness visitor will be putting on the ‘plate’ they would hand to me.

Seeing this, I wanted a way to catch these in ZB Block and then ban their IPs at Cloudflare. I came up with three strategies:

  1. If the request presents me an obviously ‘bad’ cookie name (e.g. ‘blogger_TID’) I block that in ZB Block.
  2. If the request presents me with more than 30 cookies, I block that in ZB Block.
  3. If the request presents cookies, but does not accept the cookies I set, I will block that request in ZB Block.
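For the curious, the logic of the first two rules is simple enough to sketch. ZB Block itself is PHP, so the Python below is only an illustration: the bad-name list and the 30-cookie threshold come from the rules above, while the function names are made up for the sketch.

```python
# Illustrative sketch of blocking rules 1 and 2. BAD_COOKIE_NAMES holds
# names my domain never sets; MAX_COOKIES is the cap from rule 2.
# parse_cookie_header() and should_block() are hypothetical helpers.
BAD_COOKIE_NAMES = {"blogger_TID", "ASPX", "vbseo"}
MAX_COOKIES = 30

def parse_cookie_header(header: str) -> dict:
    # Split a raw "Cookie:" request header into a name -> value dict.
    pairs = (chunk.split("=", 1) for chunk in header.split(";") if "=" in chunk)
    return {name.strip(): value for name, value in pairs}

def should_block(cookie_header: str) -> bool:
    cookies = parse_cookie_header(cookie_header)
    if BAD_COOKIE_NAMES & cookies.keys():  # rule 1: obviously spoofed name
        return True
    if len(cookies) > MAX_COOKIES:         # rule 2: absurd number of cookies
        return True
    return False
```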

I’ve implemented the first two.

The third is a potentially very powerful method of catching script-kiddie bots programmed to present cookies they think will permit them to comment, but not programmed to take requests to set any cookies. (This would be very weird behavior for a browser.)

However, because I’m uncertain about the reliability of cookie setting and unsetting commands, I don’t want to do the 3rd method until after I “see” what happens if I start blocking some things that don’t accept cookies. So, in that light, today I’m forcing commenters to accept cookies (and watching my logs). As I said, if there are no big problems, I’ll continue with that policy.
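The handshake behind today’s test is equally simple in sketch form. Again, this is illustrative Python, not the actual ZB Block code; only the cookie name ‘zbb_1’ is real, and the hook names are invented.

```python
# Illustrative handshake: serve a page with Set-Cookie, then require the
# cookie back before accepting a comment POST.
ZBB_COOKIE = "zbb_1"

def on_page_view(response_headers: list) -> None:
    # Ask the visitor's browser to store the session cookie.
    response_headers.append(("Set-Cookie", ZBB_COOKIE + "=1; Path=/"))

def may_comment(cookie_header: str) -> bool:
    # Accept a comment POST only if the browser presented zbb_1 back.
    names = {chunk.split("=", 1)[0].strip()
             for chunk in cookie_header.split(";") if chunk.strip()}
    return ZBB_COOKIE in names
```

A browser that accepts cookies passes both steps without the visitor noticing; a bot that presents spoofed cookies but ignores Set-Cookie fails may_comment().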

BTW: Thanks BillC for being the first guinea pig. He was caught within a few minutes of my adding the block requiring people to accept cookies.

Volunteer Testers for Ban Nasties Plugin.

The ‘Ban Nasties’ plugin has been working fine, without incident, for nearly a week now. If there are a few brave souls out there (or people who have testing platforms that can’t get too screwed up) I’d like volunteers to test it out (and possibly tell me if it’s got any security vulnerabilities etc.). To do so, visit:

http://rankexploits.com/protect/2012/04/ban-nasties-plugin/

Leave a comment at that post, and I’ll get you the address for the plugin as currently constituted.

Those wishing to discuss the Ban Nasties plugin (or even the ‘weird’ event last night which did not involve my plugin), visit the side blog.

Feel free to continue discussing climate here. And… it’s a good time for someone to write a guest post. 🙂

Comment Control for WordPress: .htaccess rules

I’ve managed to ban a sufficient number of “cracker”-type bots that a significant fraction of the remaining ‘bot’ load is from fairly dumb comment spam bots that do the following stupid things (in order of frequency):

  1. Claim they are the googlebot by spoofing the user agent.
  2. Claim they are referred to “wp-comments-post.php” from the home page of my blog, the domain root, or nowhere at all.
  3. Give no user agent.

In contrast, honest-to-goodness commenters hitting the comments provide a referrer that points to a blog post (e.g. “http://rankexploits.com/musings/2012/bugs-may-find-this-sad/”), not the home page (“http://rankexploits.com/musings/”) or root (“http://rankexploits.com/”).

Honest-to-goodness commenters do not claim they are the googlebot, which is never actually inclined to comment. By the way: most of the bots claiming to be the googlebot are from Brazil. They do manage to get a few comments into the database. Akismet keeps you from seeing them, but they are sufficiently not-stupid that I have to empty the spam bin.

I was catching these comment spammers, especially the Brazilians, by logging all hits to WordPress in 15-minute-long files and then running a clean-up script. But that Brazilian bot manages to dance quite a few sambas before I get it. So, I’m now forbidding these in .htaccess. (I later send all 403s to a script; one of the things this does is report things spoofing the googlebot to Cloudflare in real time. So this will trim the Brazilian comment spammer’s dance card.)

For WordPress bloggers who merely want to reduce the CPU and memory load on their servers, I recommend the following bit of code in .htaccess:

# comment controls
# if referrer is the domain root or the blog homepage...
RewriteCond %{HTTP_REFERER} ^http://(.+\.)?rankexploits\.com/?(musings/?)?$ [NC,OR]
# ...or is not from the blog itself...
RewriteCond %{HTTP_REFERER} !^http://(.+\.)?rankexploits\.com/musings [NC,OR]
# ...or claims it's a spider, or has a blank user agent.
RewriteCond %{HTTP_USER_AGENT} (google) [NC,OR]
RewriteCond %{HTTP_USER_AGENT} ^$
RewriteRule wp-comments-post\.php$ - [F,L]
# end comment controls
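If you want to convince yourself what that first referrer pattern does and doesn’t catch, you can poke at it with Python’s re module as a stand-in for mod_rewrite’s PCRE matching (the two agree on a pattern this simple; re.IGNORECASE mirrors the nc flag):

```python
# Not part of the .htaccess: a quick check of the first RewriteCond's
# referrer pattern.
import re

referrer_pattern = re.compile(
    r"^http://(.+\.)?rankexploits\.com/?(musings/?)?$", re.IGNORECASE)

# The domain root and the blog homepage match, so those referrers are caught.
assert referrer_pattern.match("http://rankexploits.com/")
assert referrer_pattern.match("http://www.rankexploits.com/musings/")

# A referrer pointing at an actual post, as a real commenter sends, does not.
assert not referrer_pattern.match(
    "http://rankexploits.com/musings/2012/bugs-may-find-this-sad/")
```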

You’ll need to tweak the first RewriteCond line above to use this at your blog, in the following way:

  • If hosted at the top of your domain, replace the first line with:

    RewriteCond %{HTTP_REFERER} ^http://(.+\.)?mydomain\.com/?$ [NC,OR]

  • If hosted in a subdirectory (e.g. “blog”), replace the first line with:

    RewriteCond %{HTTP_REFERER} ^http://(.+\.)?mydomain\.com/?(blog/?)?$ [NC,OR]

where “mydomain.com” is your domain name and “blog” is your subdirectory.

Should the Brazilian bots start to pretend they are bing in addition to google, you can change (google) to (google|bing). That will also forbid commenting by the similarly silent bing bot.

Regular visitors should see no particular change in commenting. Comments might post slowly, but that’s an (annoying) feature, not a bug. (Also, it’s mostly due to other spam filters, not the .htaccess.)