Ray asked
Ray (Comment #87546)
December 19th, 2011 at 12:49 pm
Lucia,
I get mine from here:
http://data.giss.nasa.gov/gistemp/
More specifically:
http://data.giss.nasa.gov/gist…..s+dSST.txt
I can’t see the Nov. figure in your second link either.
By the way, is it just me or is the connection to your site very slow at the moment? Pages are taking forever to load.
Ray (Comment #87548)
At the time, I didn’t know if the blog was slow. However, the correct answer was “yes”. I had to go out after responding to Ray’s comment. When I returned, I saw a message indicating the site went down. Examining the server logs, the likely cause was sustained attempts to log in by a ‘bot at bzq-109-66-7-15.red.bezeqint.net.
bzq-109-66-7-15.red.bezeqint.net - - [20/Dec/2011:01:17:23 -0800] "GET /musings/ HTTP/1.1" 403 553 "-" "Java/1.6.0_25"
bzq-109-66-7-15.red.bezeqint.net - - [20/Dec/2011:01:17:23 -0800] "GET /musings/ HTTP/1.1" 403 553 "-" "Java/1.6.0_25"
bzq-109-66-7-15.red.bezeqint.net - - [20/Dec/2011:01:17:24 -0800] "GET /musings/ HTTP/1.1" 403 553 "-" "Java/1.6.0_25"
119.75.23.81 - - [20/Dec/2011:01:24:54 -0800] "GET /musings/ HTTP/1.1" 403 500 "http://rankexploits.com/musings/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; Maxthon; .NET CLR 1.1.4322)"
bzq-109-66-7-15.red.bezeqint.net - - [20/Dec/2011:01:25:25 -0800] "GET /musings/ HTTP/1.1" 403 553 "-" "Java/1.6.0_25"
bzq-109-66-7-15.red.bezeqint.net - - [20/Dec/2011:01:25:26 -0800] "GET /musings/ HTTP/1.1" 403 553 "-" "Java/1.6.0_25"
bzq-109-66-7-15.red.bezeqint.net - - [20/Dec/2011:01:25:27 -0800] "GET /musings/ HTTP/1.1" 403 553 "-" "Java/1.6.0_25"
bzq-109-66-7-15.red.bezeqint.net - - [20/Dec/2011:01:26:25 -0800] "GET /musings/ HTTP/1.1" 403 553 "-" "Java/1.6.0_25"
bzq-109-66-7-15.red.bezeqint.net - - [20/Dec/2011:01:26:26 -0800] "GET /musings/ HTTP/1.1" 403 553 "-" "Java/1.6.0_25"
bzq-109-66-7-15.red.bezeqint.net - - [20/Dec/2011:01:26:27 -0800] "GET /musings/ HTTP/1.1" 403 553 "-" "Java/1.6.0_25"
bzq-109-66-7-15.red.bezeqint.net - - [20/Dec/2011:01:29:26 -0800] "GET /musings/ HTTP/1.1" 403 553 "-" "Java/1.6.0_25"
bzq-109-66-7-15.red.bezeqint.net - - [20/Dec/2011:01:29:27 -0800] "GET /musings/ HTTP/1.1" 403 553 "-" "Java/1.6.0_25"
This continues. Also, the hits from IPs beginning with 109 aren’t shown, because I wasn’t denying those yet; the excerpt above is filtered to visits that were served a “403”.
For those who want to see a bot operating in its full glory, the log of forbidden files is here: Forbidden. I’m now ‘403ing’ anything with user agent Java, anything from host bezeqint.net, and anything with an IP beginning with 109. Previously, the only block working was the one on bezeqint.net. (Sorry, Israelis, but if you want to visit The Blackboard, you will need to use some ISP other than bezeqint.net. I can’t keep the blog running while bezeqint.net is hitting it incessantly.)
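For reference, a rough sketch of what those three blocks might look like in Apache 2.2-style .htaccess. This is a guess at the shape of the rules, not a copy of the actual file:

```apacheconf
# Sketch only: deny by user agent, by host name, and by leading IP octet.
# Directive spellings follow Apache 2.2; the live rules may differ.
SetEnvIfNoCase User-Agent "Java" bad_bot
Order allow,deny
Allow from all
Deny from env=bad_bot
Deny from bezeqint.net
Deny from 109.
# Serve the "friendly" page on a 403
ErrorDocument 403 /forbidden.html
```

Note that `Deny from bezeqint.net` only works when Apache can reverse-resolve the client IP to that host name, which is one plausible reason a bare IP could slip past a host-name block.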
If anyone can advise me on other measures– whether legal or technical, please do.
Update: Sorry, British readers on btcentralplus.com (and some elsewhere, like maybe Germany, the United Arab Emirates, etc.), I also blocked you. I mistook DomainTools’ description as saying all hits starting with 109. were from bezeqint.net. Not so….. I now have a tighter block and am keeping an eye on this. If you do get blocked, you should at least see a “friendly” page that will give you my email address, and you can send me email!
Wow, I wonder why they keep trying to DoS you, if they are just crawling your site or whatnot. I wonder how the bigger sites handle this since you’d think they’d be hit with a lot more of these sorts of attempts. Hopefully our resident experts will chime in.
I’m a bit baffled too. I can’t think of a reason why bots would attack the Blackboard in preference to any other site. Do you use any specific/unique software?
I wish I was an expert and could go out bot-hunting… All I can do is wish best-of-luck.
Come on computer-whizzes – man the barricades 🙂
Lucia – when you say the site was down, what does that mean exactly?
What does the Apache error log say?
I am curious about the number of hits that are not shown in the forbidden file. In and of itself, that is pretty light.
Are you shared, VPS or dedicated?
It looks like you might only have to deny 109.66.7.15
It appears to be a DSL connection.
Lucia,
I am afraid that most of this is above my head and I can’t help on other measures. All I know is that at the time, loading the blog pages was taking a long time, but it is back to normal now.
What surprises me is that nobody else appeared to notice or comment.
I guess the 12th warmest year is still kind of cold, but how much further does it have to drop to get back to average? And is that the 16th warmest year of 1979-2011?
Hope your computer settles down ASAP
Kan
For the IPs that got 403 errors because I forbade them in .htaccess:
[Tue Dec 20 01:17:23 2011] [error] [client 109.66.7.15] client denied by server configuration: /home/ludiary357/rankexploits.com/musings/
[Tue Dec 20 01:17:23 2011] [error] [client 109.66.7.15] client denied by server configuration: /home/ludiary357/rankexploits.com/forbidden.html
Does this mean I have a problem with htaccess? Do I need to figure out how to let them see the forbidden page? (It would seem pointless for me to have that if they never see it.)
I do want to forbid this site. They have gone wild before– but not like yesterday!
Kan–
I added
<Files forbidden.html>
order allow,deny
allow from all
</Files>
So, that should clear up half the errors. (The baidu spider also triggers lots of this. It’s obnoxious and I [F] (forbid) it in .htaccess.)
But it seems to me that 109.66.7.15 should still have clearly gotten the message it was forbidden. Their bot must just get stuck?
Kan–
More… I don’t know if this is interesting. But when red.bezeqint.net started trying to access like this (access log):
109.66.7.15 - - [20/Dec/2011:06:22:57 -0800] "GET /musings/ HTTP/1.1" 200 573 "-" "Java/1.6.0_25"
The error logs gave this
[Tue Dec 20 06:22:57 2011] [error] [client 109.66.7.15] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
So, the access log says 200, but I’m getting that long error. (There is a rash of these around the time when Ray complained of slowness. I mean… I’m counting 11 at time 12:29:52, which matches the time of Ray’s comment. Apache logs and blog time zones differ.)
In contrast, when the access log looks like this:
bzq-109-66-7-15.red.bezeqint.net - - [20/Dec/2011:06:12:44 -0800] "GET /musings/ HTTP/1.1" 403 553 "-" "Java/1.6.0_25"
The error log looks like this:
[Tue Dec 20 06:12:44 2011] [error] [client 109.66.7.15] client denied by server configuration: /home/ludiary357/rankexploits.com/musings/
[Tue Dec 20 06:12:44 2011] [error] [client 109.66.7.15] client denied by server configuration: /home/ludiary357/rankexploits.com/forbidden.html
(I’m now .htaccess blocking by also specifying IPs starting with 109. I figure it will change IPs, and that’s all bezeqint. I’d like a tighter block… but I can’t have the blog crashing or no one visiting!)
(By the way, for those always wondering why my host doesn’t “protect” me: it is. It’s protecting against obvious crap like this:
[Tue Dec 20 14:19:47 2011] [error] [client 184.154.128.18] ModSecurity: Access denied with code 503 (phase 2). Pattern match "(?:\\.php\\?act=(chmod&f|cmd|ls|f&f)|cx529\\.php|\\.php\\?(?:phpinfo|mtnf|p0k3r)|/shell[0-9]?\\.php|/\\.get\\.php)" at REQUEST_URI. [file "/dh/apache2/template/etc/mod_sec2/gotroot/50_asl_rootkits.conf"] [line "57"] [id "390146"] [rev "17"] [msg "Atomicorp.com - FREE UNSUPPORTED DELAYED FEED - WAF Rules: Command shell attack: PHP exploit shell attempting to run command"] [data "/shell.php"] [severity "CRITICAL"] [hostname "rankexploits.com"] [uri "/wp-content/plugins/auto-attachments/thumb.php"] [unique_id "TvEKA0WjyWgAACscPzsAAAAA"]
But this stuff is 4 hours after Ray’s comment about slowness, and it’s also long after the blog crashed. The host does a lot of protecting. But… there is a limit.)
Sigh….
109. is too tight. I blocked Josh and most of England…. Sigh…
I guess I need to watch and figure out which are bezeqint!
You might also check your resources. That is, see which process is running out of memory or CPU or bandwidth. Maybe more memory, or limiting the process, or something can help.
Why are you blocking all of 109.0.0.0/8? Can’t you specify the host or something more specific?
Kevin–
Thanks! But I’m going to need you to throw me a bone here.
Explanation: I am not an IT person, so when you say “check your resources”, I need you to follow up with a nuts-and-bolts explanation of how you think I would do that. The explanation might be something like: Fire up unix. Connect. Do the following commands. Etc. Or “run the following script”. Or something.
When Dreamhost checks, they discover that “php” is using too many resources. Specifically, it is running out of memory. This is useless information. (And for that reason, I’ve never been motivated to learn more about how to “check resources”.)
Basically, I already know the main resource hog is php. It’s used to run WordPress. But if, by checking resources, you think I can discover something other than “php” is using too many resources, let me know. Also, explain how to do whatever it is you want me to check.
On memory: I know I am running out of memory; that’s why I have to reboot. If I want to spend an infinite amount of money, I can get more memory. But I don’t want to spend a zillion dollars a month just to permit a wild, badly programmed bot to try to load the same page 11 times a second all day. All those loads running in parallel can grab a lot of memory, because merely loading WP into memory hogs tons of it.
Nevertheless, I know limiting resources might be useful. In terms of nuts and bolts, can you tell me how to do that? Obviously, blocking the wild ‘bot is one method of limiting resources to that bot. But other than that, how do I do what you are suggesting in a way that doesn’t basically let the ‘bot limit resources for desirable users?
Kevin–
I don’t know. Can you tell me how?
Here’s background:
I was blocking bezeqint.net, using
deny from bezeqint.net
Things that showed up in the server logs as bzq-109-66-7-15.red.bezeqint.net did not get in. So, that seemed to “work”. However, things that show up as IP 109.66.7.15, which corresponds to bezeqint.net, were permitted in.
I am not an IT person, so I don’t know why this happens.
I blocked all of 109. because I thought the entire range was bezeqint. But it’s not.
I would love to have a narrower block. If you can tell me how to block bezeqint.net without blocking others, I will try it. But otherwise, the answer to why I don’t do that is “I don’t know how.” My not knowing how doesn’t mean it can’t be done; it means I lack knowledge. So, if you can advise me how, then I might be able to do what you suggest.
In Linux, the command to show processes is “ps”, and “top” ranks them in real time. However, the key will be to know which one is the problem. This can be tricky and time consuming. Sorry, I do not know the system you are using, so I won’t be much help. I know routers. Do you have a firewall or router? Because you could likely block them there more easily too.
Blocking IPs depends on your system’s syntax, and there is more than one way to specify hosts or smaller networks. If I explain it the way a router blocks IPs, it might just make it more confusing. In Linux you can just send all their requests to the bit bucket, which will be something like “/sbin/route add -host 109.66.7.15 reject”, and then to restore, “/sbin/route del -host 109.66.7.15 reject”.
Lucia it might be easier to just contact their administrator. According to whois, this is the contact info:
Carrick,
Yes, that is good, but the report to the abuse email will take time to process. Lucia also needs a fix that will work in real time when the next one occurs. Blocking the DNS or IP of the offender is still reactive, but more timely than contacting the administrator of the offending network. There are several potential solutions if she can find out what resource is getting flooded. Possibly more RAM or CPU, a distribution of the processing across a second computer, or an upgrade could solve her problem. Or possibly limiting the number of connections or requests from any host could be the solution. It’s hard to say without knowing the true bottleneck. There is free and trial server monitoring software that might help Lucia. Check out http://sixrevisions.com/tools/10-free-server-network-monitoring-tools-that-kick-ass/
Translation please? Heh!
Kevin–
I don’t run my server. I have an account on a machine run by others. So, I don’t think I can use that software. Looking at all the readouts, I still don’t see how I would learn anything useful beyond what I already know, which is:
1) Bezeqint is hammering the site by calling a resource intensive thing over and over and over again– dozens of calls a second.
2) When the resource intensive thing is called, that creates a huge memory draw and the site goes down.
So, Bezeqint is making, or is at least trying to make, multiple connections simultaneously. I don’t need that software to know this. It’s obvious in the logs.
I agree “limiting the number of connections or requests from any host could also be the solution” on my system. My approach is to ban Bezeqint. But… in future, I am sure there will be other ‘bots gone wild’. If you know how I could limit “the number of connections or requests from any host”, I could try it.
But I cannot implement your general idea because I don’t know how to do it. Do you know how it might be done? If so, could you suggest how?
All I know is that it’s Apache.
I’m on shared hosting. So… I think no. I have a “virtual private network”… but I don’t know if that gives me a virtual firewall or virtual router.
I don’t think I have access to do this sort of thing with my account.
from Google translate:
Maybe contact your server provider and explain your problem. You are unlikely the first to have this problem. Look into switching providers if you are unsatisfied.
Kevin– I contacted my server provider as you suggested.
George– Reading the translation, I have a feeling I’m going to have to get my host involved….. I wonder if what I’m seeing is “scanning/intrusion or harassment”? Anyway, I can’t communicate in Hebrew– maybe my host can.
Lucia,
I can confirm that your site was definitely fairly hostile to UK visitors last night – 403 Forbiddens, left, right and centre.
🙂
There seem to be quite a few sites suggesting bezeqint bots are tied in with some picture copyright vendettas? ZBBlock certainly doesn’t seem too fond of them.
Don’t get too hung up on the PHP; focus on the fact that you are talking about an Apache server with limited RAM.
However they are generated, that Apache web server serves up HTML web pages in response to client requests coming in over the internet. All such requests are handled by child processes set up by the Apache server. ‘Worker threads’ if you like.
When something like your rogue bot hammers the server with rapid multiple requests, additional worker threads can be spawned to handle the spurious load, and if the server config has not been carefully matched to the actual hardware (RAM available vs. number of threads, etc.) it will very rapidly kill the server.
Tweaking your server settings should prevent the bots from killing your server, but does not solve the problem of the bots themselves. Advising you on this would require a fair bit more info on the actual Apache config used – which prefork? fastcgi? PHP memory and Apache setup for certain params? etc., etc.
And if all else fails, maybe you could install customised error pages?
http://dailypicdump.com/81a3f5/server-errors-by-cats-15-pics
🙂
Lucia:
I’ve had a number of dealings with Israeli businesses and I’ve never found language an obstacle. (From my limited experience, the only English speakers Israelis suddenly have trouble understanding are rude tourists.)
Chuckles–
Yes. I think it’s related to this:
http://rankexploits.com/musings/2011/copyright-legal-eagles/
It is rumored that PicScout, used by Getty, is operated on bezeqint.
I’d been letting ZBblock block them, because ‘bots have been rampant since… oh… June? August? I was using ZBblock before I got the stupid Getty letter or ever heard about this.
But there are some bots that you just have to take in hand, because they hammer the site so badly that even handling them through ZBblock (a light php application) is too much. These include baidu, an “internet club” in China and… well, at least recently, bezeqint!
That said, if it is PicScout, this is a really stupid bot. I don’t see how or why it would be remotely useful for them to try to load the front page of the blog over and over and over, more than 10 times a second! The thing just has to be stuck somehow.
Carrick–
I sent it to my best friend since high school. Her answer: “have forgotten Hebrew. Too bad after 10 years of forced attendance at Hebrew school. Ugh! Sent via BlackBerry from T-Mobile” (She spent a summer picking oranges in Israel. So… you’d think she might remember!)
Well “109.66.7.15” seems to have given up. But someone on 109.230.xxx.xxx is trying. They are from “Germany Chemnitz Marcel Edler Trading As Optimate-server ”
I’d think I’d gone overboard, but at least one of them is wildly spoofing user agents.
1) “Mozilla/4.0 (compatible; MSIE 4.01; Digital AlphaServer 1000A 4/233; Windows NT; Powered By 64-Bit Alpha Processor)”
2) “Mozilla/4.0 (compatible; MSIE 6.0; AOL 9.0; Windows NT 5.1)”
3) “Mozilla/4.76 [en] (Windows NT 5.0; U)”
etc. all go with 109.230.245.197
The other 109.230’s might be innocent. Sigh….
I have friends who know Hebrew if you have a need, but my recommendation is to send them the complaint in English along with as much detail as possible, which I suspect you’ve already done.
The Hebrew message basically just says to do that (via translate.google.com).
Lucia, yes, it’s very possible the bot is broken; many are.
The problem is that rapid hammering at the front door is causing your server to spawn new worker threads to handle these spurious requests, using up memory in the process. When the bot goes away, those now redundant worker threads/processes can hang around for up to ‘forever’, crippling your system until you do a restart.
The default Apache settings are NOT your friend in this scenario.
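To make that concrete: on a small box running PHP under Apache’s prefork MPM, the settings in question are the MPM limits. A sketch with illustrative numbers only (not a tuned recommendation; the right values depend on how much RAM each Apache child actually uses):

```apacheconf
# Apache 2.2 prefork sketch. If each PHP/WordPress child uses ~50 MB,
# MaxClients 10 caps Apache at ~500 MB instead of the default 256 clients.
<IfModule mpm_prefork_module>
    StartServers          2
    MinSpareServers       2
    MaxSpareServers       5
    MaxClients           10
    MaxRequestsPerChild 500
</IfModule>
```

MaxRequestsPerChild recycles each worker after a fixed number of requests, which also limits how long a leaky process can “hang around”.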
Lucia-
I run my own site and have had to investigate problems similar to this and here are a few things…
First… I think you mean you have a vps (virtual private server) and not a vpn.
Anyway…
Many vps accounts will give you “SSH” (not SSL) access to the command prompt. Check your control panel for a link to run a java based connection (browser pop-up) or download a program like “putty.exe” to gain access. If you have SSH access, then you can do things from the server command prompt.
Now, let’s look at what is happening. A request is sent past the router, to the server, to the Apache program. The server is getting hammered because Apache (application name “httpd”) is overwhelmed trying to deal with all the requests. It has to open a thread to handle each request, figure out what it should do with it (show the website itself or the redirect page), and then send that page. That eats up resources (though fewer than if WordPress were doing the blocking).
The best solution is to block at the router (the request never even reaches the server). However, since that is usually not available to VPS users, a software firewall on the server is the best option. It inspects every request that hits the server to see if it is allowed; if it is, it lets the request go on to the appropriate software (such as Apache). It uses some extra resources, since the server still has to deal with the request, but far, far less than what Apache uses via an .htaccess rule. In addition, some simple firewalls are already available and running on the server, but the user just doesn’t know how to configure them (this is where SSH and online tutorials come in). You should find out from whoever is hosting your server whether there is some sort of firewall program running or available (if not, ask if they can install one).
A simple one that many Linux servers run is iptables (a block list). I started using that once my .htaccess list got too large from spammers (a large .htaccess list will slow your website down), but it was not protecting me from flooding. I started having a lot of flooding (denial-of-service attacks) from new IP addresses, so I looked for a better way. The host I have uses cPanel/WHM, so I asked them to install ConfigServer Security and Firewall (CSF is free). It is menu based and fairly easy to use, and it has proactive features like automated temporary bans for DOS attacks (X requests in Y seconds means an automatic IP ban for Z amount of time) and can block IPs based on 3rd-party blacklists.
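For anyone curious what those “X in Y seconds” knobs look like: in CSF they live in /etc/csf/csf.conf. A fragment with illustrative values only (check the CSF documentation before relying on any of these numbers):

```ini
# /etc/csf/csf.conf (fragment) - connection-tracking limits, sketch only.
CT_LIMIT = "50"         # temporarily ban any IP holding more than 50 connections
CT_INTERVAL = "30"      # check every 30 seconds
CT_BLOCK_TIME = "1800"  # ban lasts 1800 seconds (30 minutes)
# Port flood: block an IP making more than 20 connections to port 80 within 5 seconds
PORTFLOOD = "80;tcp;20;5"
```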
I have my firewall just drop the packet (if the IP is being blocked) and not send anything back to the sender. This means there is no page for them to see, and it uses the least processing power. So I end up using both the firewall and .htaccess: the firewall blocks the DOS attacks and certain other bad things, while .htaccess does the polite redirect to a page with contact information on it.
My site hasn’t gone offline from a DOS attack since.
The following ranges:
31.168.0.0-31.168.255.255
62.219.0.0 – 62.219.255.255 (or 62.219.x.x)
79.176.0.0 – 79.183.255.255
80.74.96.0 – 80.74.111.255
81.218.0.0 – 81.218.255.255 (or 81.218.x.x)
82.80.0.0 – 82.81.255.255
84.108.0.0 – 84.111.255.255
85.114.96.0 – 85.114.103.255
85.130.128.0 – 85.130.255.255
91.192.200.0 – 91.192.203.255
109.64.0.0 – 109.67.255.255
Many small blocks of addresses between:
192.114.11.0 and 192.118.255.255
but with a couple of “holes” that might have addresses belonging to someone else.
212.25.64.0 – 212.25.127.255.255
212.179.0.0 – 212.179.255.255
Should block their networks.
Stilgar, thanks for the comment on CSF. Sounds very useful.
oops,
212.25.127.255.255 should be 212.25.127.255
Converting ranges to CIDR:
31.168.0.0/16
62.219.0.0/16
79.176.0.0/13
80.74.96.0/20
81.218.0.0/16
82.80.0.0/15
84.108.0.0/14
85.114.96.0/21
85.130.128.0/17
91.192.200.0/22
109.64.0.0/14
192.114.11.0/24
192.114.12.0/22
192.114.16.0/20
192.114.32.0/19
192.114.64.0/18
192.114.128.0/17
192.115.0.0/16
192.116.0.0/15
192.118.0.0/16
212.25.64.0/18
212.179.0.0/16
If you have something that runs Ubuntu or Debian Linux (or Knoppix or a variant of either),
apt-get install ipcalc
might be a friendly thing to have.
George–
I now do
deny from 31.168.0.0/16
deny from 62.219.0.0/16
deny from 79.176.0.0/13
deny from 80.74.96.0/20
deny from 81.218.0.0/16
deny from 82.80.0.0/15
deny from 84.108.0.0/14
deny from 85.114.96.0/21
deny from 85.130.128.0/17
deny from 91.192.200.0/22
deny from 109.64.0.0/14
deny from 192.114.11.0/24
deny from 192.114.12.0/22
deny from 192.114.16.0/20
deny from 192.114.32.0/19
deny from 192.114.64.0/18
deny from 192.114.128.0/17
deny from 192.115.0.0/16
deny from 192.116.0.0/15
deny from 192.118.0.0/16
deny from 212.25.64.0/18
deny from 212.179.0.0/16
I’ll get a calculator tool! Many must exist – in fact, an online reverse tool would be useful. This is a GREAT help. (Of course, the proof is in the pudding. But I hope to see all these guys 403’d now!)
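On the calculator-tool wish: for anyone with Python 3.3+ handy, the standard library’s ipaddress module does both directions, no online tool needed. A small sketch (the sample ranges are taken from George’s list above):

```python
# Convert an inclusive IP range to CIDR blocks, and a CIDR block back
# to its first/last address, using only the Python standard library.
import ipaddress

def range_to_cidrs(first, last):
    """Summarize an inclusive IP range into the fewest CIDR blocks."""
    return [str(net) for net in ipaddress.summarize_address_range(
        ipaddress.ip_address(first), ipaddress.ip_address(last))]

def cidr_to_range(cidr):
    """The 'reverse tool': expand a CIDR block to (first, last) addresses."""
    net = ipaddress.ip_network(cidr)
    return str(net.network_address), str(net.broadcast_address)

print(range_to_cidrs("109.64.0.0", "109.67.255.255"))  # ['109.64.0.0/14']
print(cidr_to_range("212.25.64.0/18"))                 # ('212.25.64.0', '212.25.127.255')
```

Note that a ragged range can summarize to several CIDR blocks, which is exactly why George’s list needed multiple `192.114.x` entries.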
You can also consider the CloudFlare.com free level of service, as they both filter bots and cache the web site. It requires defining your DNS records in their system (“A” records, “CNAME” records, “MX” record), and then pointing your domain at their DNS servers.
Some tinkering with your log is needed, in order to log the original IP address instead of CloudFlare’s servers. There is a WordPress plugin to do that, if that happens to be what you’re using.
AnonyMoose—
It looks nice… but…. I’ve been clicking and I can’t tell what level of service is permitted. Bandwidth? Memory? CPU?
These advertising pages tend to be all fluff! I guess I’ll try to find out tomorrow. (Moving is a pita. Might be hard to believe, but it sort of is. At a minimum, one is down for a while while the DNS pointers resolve.)
The request from 109.66.7.15 is a normal GET to your top level. If it only happened with 109.66.7.15, then there is something else going on under the hood. I would expect this error to show up a lot, since GET http://rankexploits.com/musings is a normal request to access your site.
Are there a lot of errors like the one generated at [Tue Dec 20 06:22:57 2011]? Not just from 109.66.7.15, but from lots of other IPs as well?
If it only gives this error for 109.66.7.15, are there one (or more) requests from this IP (as an IP, not as bzq-) before this occurrence?
By the way – did you have the user-agent Java block in place before or after this?
The multiple-redirect error could come from you making modifications to .htaccess/ZBblock. Was this a time you were mucking around with it?
Lucia,
“Bandwidth? Memory? CPU?” There is no memory or CPU quota; it is a content delivery network, not a web hosting service. That’s why you give it control over your DNS service – others get told that your IP is that of a nearby CloudFlare server (which gives them a cached copy of your page), while CloudFlare uses the DNS info which you defined to access your actual web server. CloudFlare tries to filter out bots. If you get a sudden burst of traffic, their cache reduces your load (you can also turn off caching for particular servers, such as a test server).
Because all your traffic arrives through CloudFlare, all the IP addresses seen by your server suddenly become CloudFlare IP addresses. The user’s actual IP address is delivered in an extra header, so some sites need to configure logging and reporting tools to use that other field (it’s a standard approach – caching proxies have worked this way for some time). The WordPress plugin makes the needed adjustment for your log.
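If the host allows custom log configuration, the same adjustment can be made at the Apache level instead of with a plugin. A sketch using the CF-Connecting-IP header that CloudFlare adds (header name per CloudFlare’s documentation; the log path and format name here are illustrative):

```apacheconf
# Sketch: log the CloudFlare-supplied visitor IP instead of the proxy's address.
LogFormat "%{CF-Connecting-IP}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" cf_combined
CustomLog /var/log/apache2/access_log cf_combined
```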
You can actually go through the signup and configuration options for the free service, and see just what the configuration options are. CloudFlare simply won’t be doing anything for you until the last operational step, where you use your domain provider’s tools to change your name server configuration to use the CloudFlare DNS servers instead of your current DNS provider. CloudFlare does offer some premium services for a fee, but they offer their filtering and caching free.
AnonyMoose–
Ok. That sounds pretty cool. I’d assumed the vagueness on the memory, CPU, etc. was because so many hosting services are also vague. But… pretty cool. I’ll figure out if that will work for me. On the knitting side, I have a blog I never contribute to anymore, plus I have static content. So… I’ll have to see how that interacts with CloudFlare too.
Static content is fine for CF, because it can be cached just fine. And you can tell CF which subdomains should be cached and which should not be cached. But CF knows that dynamic content exists, and dynamic content should be updated fairly soon inside CF — CF says that your web server might see a significant reduction in bandwidth, but much of the legitimate traffic still tickles your server and triggers updates inside CF.
AnonyMoose–
So far, this looks pretty good. There were some things I needed to do to keep ZBblock functionality, which I wanted. At least a few people got bounced today because I couldn’t quite be sure I’d gotten the script that substitutes the underlying IP for CloudFlare’s before things pass through ZBblock, but I think most of the kinks should be out now. I’m not sure I’m blocking everyone I used to block… but that may be ok.
I received a “403 Forbidden” message when I tried to visit your site yesterday. Fortunately only temporary.
Since, as far as I know, none of the reasons given apply to me, I don’t know why I should have received this.