Category Archives: Uncategorized

Some people getting blocked (sometimes)

The spam filter is blocking spam, but also some people. The stated reason is “invalid token”.
The filter (WP-Bruiser) “saves” the blocked comments for 30 days, but it puts them in a special-purpose report, and I don’t seem to be able to move them to comments. I also seem to be able to see only recently blocked comments. So I can’t find everyone who has been blocked. 🙁

I have whitelisted the IP ranges of two people who have been blocked: Tom Scharf and SteveF.

If you think you have been blocked, let me know. I can find your typical IP ranges and whitelist them.

Open thread. 🙂

Update: I found another page in the WP-Bruiser reports. All of the following were blocked at least once: SteveF, Mark Bofill, Tom Scharf, Dewitt Payne, Russel Klier. Possibly more. It wasn’t personal. You aren’t banned or intentionally moderated. It’s all for “invalid token”. I’ve googled; I still have no idea what that is. I’ve whitelisted the /24 ranges around your IPs. That should fix things for you. (The filter is blocking lots of obnoxious spam too, by the way.)

Have you heard textbooks are expensive?

I periodically look for used copies of the books that students I’m tutoring are assigned. I’m very careful not to buy those listed at ridiculous prices. From time to time, I see a price that makes my eyes pop.
[Image: screenshot of the Amazon listing for “College Physics” by Etkina et al.]

Are the two versions of “College Physics” by Gentile, Van Heuvelen and Etkina, available at prices “from $1,144.34”, printed on gold foil?! Do they contain all the answers to the 2017 AP Physics exam? What’s going on?

(Yes. Those are rhetorical and I’m pretty sure the answers are “No”. “No”. And “programming error”.)

For those wondering: you can get the AP version of this textbook at a lower price. The publisher’s price is $213.96. You can also find listings on Amazon for less, although I haven’t found any for $5 or less; prices that low are generally easy to find once a new edition is published.

Fortunately, none of the students I’m tutoring use this book. It’s just sometimes mentioned as “the” book for the AP. (I doubt any book is “the” book.)

I thought you might enjoy seeing that heart-stopping price. Always compare prices on Amazon. Because otherwise… oy!

Meanwhile, the comment threads on previous posts have grown long. Open Thread.

Merry Christmas 2015!

Merry Christmas to all, of any faith from Christian to Atheist (like me!). We did the whole Swedish-American Smörgåsbord thing last night.

We exchanged token cheapo gifts despite the standing “no presents” rule, which is regularly violated. By cheapo, I mean really cheapo. I found ties at the Salvation Army: $3 for never-used silk ties in their original wrappers. The guys had to pull at random out of a grab bag; they seemed to like them well enough. Jim and Robert swapped the red one and the taupe one – Jim preferred the red, Robert preferred the taupe. (Robert actually raved over the taupe.)

I’d gauge my success in finding good ties as equal to what it would have been in normal stores. (How does a woman pick a good tie anyway? I knew Jim would prefer the red. But the other two? I didn’t have a clue. When confronted with the numerous $3 ties, I decided the criteria were: silk; ‘conservative’, meaning striped or a small ‘dot/square/paisley’ pattern. No wild abstracts, ducks, anime cartoons, animals, etc.)

This year’s gifts were almost as good as the $120, totally unused Braun espresso maker in the original package that I found for $5 two years ago. When Robert unwrapped that, he was squatting on the floor. He fell over…. He was sort of upset that I’d spent so much when we’d agreed “no presents” and they’d stuck to it. David, the calmer of the two, went into the kitchen and began using it. Then Jim told Robert how little I paid for it, and David and I heard Robert shout, “Lucia, you rule!”

This year the guys made espresso, which they served with cookies. (I made the cookies.)

For those wondering: I now swoop through the Salvation Army once a month when I deposit checks and meet Jim for lunch at Panera, which is very close by. Sometimes I find next year’s cheapo present; often not.

Today: German puff pancakes and turkey. Hope you all enjoy yourselves!

Reduce image scraping to prevent blog crashing and thwart copyright trolls.

As some readers recall, I’ve been beavering away at reducing the ridiculous server load caused by various bots making constant heavy requests on the blog. The requests included excessive requests for blog posts and for images. I am particularly sensitive to the image issue because I received a “Getty Demand Letter”. While I believe the Ninth Circuit’s ruling in “Perfect 10 v. Amazon” applies and hotlinking is not a copyright violation under US copyright law, I also think many people would be well advised to interfere with image scraping at their blogs. Interfering with image scraping will neither protect you from a copyright suit if you are violating copyright, nor eliminate the potential that a copyright troll will incorrectly come to believe you have stepped on their copyright. But it has the potential to increase the cost of operation of entities like PicScout (owned by Getty Images) and TinEye (owned by Idée), and so reduce the likelihood that someone like Getty Images, Masterfile, Corbis, Hawaiian Art Network (i.e. HAN), Imageline or even the now-defunct RightHaven will show up demanding money in exchange for their promise not to sue. (For more on these pests, see ExtortionLetterInfo.)

More importantly: preventing image scraping will reduce your hosting costs and the frequency with which your blog crashes as a result of some entity requesting zillions of images in a very short period of time.

For those uninterested in this topic: comments will be treated as an open thread. But for those who might want to know how to reduce scraping now, I’ll post some .htaccess code you can use.

————-

The heart of my scheme to prevent image scraping is a series of blocks in .htaccess which divert certain image requests to a .php script. Those reading will see that all blocks terminate with:

RewriteRule .*\.(jpe?g|png)$ http://mydomain.com/imageXXX.php?uri=%{REQUEST_URI} [L]

This command takes all requests ending with .jpg, .jpeg or .png and sends them to a script located at http://mydomain.com/imageXXX.php. It also tacks on the URI of the image requested. (Note: I do not filter access to .gifs.)

Those who have not yet written an imageXXX.php script can simply forbid access to these images by changing this rule to

RewriteRule .*\.(jpe?g|png)$ - [F]

The ‘- [F]’ forbids access rather than sending the requests through a filter.

I use the more complicated command because I want to log, filter and sometimes permit access. But if you have not yet written a script to log or filter and are noticing massive image scraping, ‘- [F]’ is a wise course. (FWIW: initially, I did use ‘- [F]’. Though I did not take data, it seems to me that many bots just kept requesting images after being forbidden. In contrast, as soon as I began diverting to a .php file, many bots vanished the moment they were diverted. The ‘YellowImage.jpg’ experiment and a few others were rather enlightening in this regard.)

What does imageXXX.php do?
As I mentioned: initially you can just forbid access to certain requests. But diverting to a file ultimately works better. Since I am diverting, not forbidding, the .php file does the following (a bare-bones sketch of such a script appears after the list):

  1. Because I am using Cloudflare, it pulls the originating IP and country code out of the headers Cloudflare adds to each request.
  2. Logs the request to a 15-minute image log file and to a daily image log file.
    • A cron job running a different script (called ‘checkfornasties’) checks the 15-minute log files and, among other things, counts the number of hits from each non-whitelisted IP and the number of user-agents used by that IP during the 15-minute span. If either is excessive, that IP is banned at Cloudflare. (A sketch of this cron script appears a bit further down.) For those at universities or in IT worrying that they will log on with their PC or Mac and then turn around and use the browser on their workstation: my theory is Julio did just that yesterday when I was implementing the excess-hits script. I decided using two user-agents in 15 minutes is not excessive. 🙂
      This cron job also checks for known nasty or image-stripping referrers and user-agents (i.e. UAs), and bans requests using those.
    • I manually scan through the image log file from time to time to determine whether I should tweak the .htaccess file.
    • (I also manually scan through raw server logs, but this has nothing to do with my php file.)
  3. Runs the request through a series of checks to decide whether it should serve the image, as I sometimes wish to do. (Coding the image script to sometimes serve files is absolutely necessary if you use Cloudflare. Those who helped by answering questions about the “Yellow” and “Lavender” images: thanks! You contributed mightily to this. Especially the one who used the really unusual user agent!)
    • Images with referrers on a whitelist are given an ‘ok’ and will be served if they survive the final step. Lots and lots and lots of you have been served images that pass through the script with no untoward effects. (Owing to screwups, some of you did briefly experience untoward effects when you tried to look at “YellowImage.jpg”. By the way: I will not be listing my whitelist. 🙂 )
    • Recent images are given an ‘ok’ and will be served if the request survives later steps.
    • All requests, whether they are given an ‘ok’ or ‘not ok’, are sent through ZBblock, which will block a lot of nasty things locally and either shows people ‘the scary message’ or a ‘503’. The ‘503’ is a lie – the server is not down. It just saves processing time relative to delivering ‘the scary message’, and it puts the request in a local blacklist and blocks that IP from further connections. (ZBblock blocked Anteros this morning; he emailed me, and I fixed the issue, which was likely blocking all sorts of people on a large ISP in a particular part of the world. I cleared all IPs out of the local blacklist.)
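For concreteness, here is a bare-bones sketch of what an imageXXX.php along these lines can look like. Treat it as a sketch under assumptions, not my actual script: the log paths, whitelisted domains and ‘recent’ months are made-up placeholders, and the real script does considerably more (ZBblock, country handling, etc.).

<?php
// Sketch of an imageXXX.php along the lines described above.
// Log paths, whitelists and 'recent' months are placeholders.

// 1. Behind Cloudflare, REMOTE_ADDR is Cloudflare's IP; the visitor's real IP
//    and country code arrive in headers Cloudflare adds to each request.
$ip = isset($_SERVER['HTTP_CF_CONNECTING_IP']) ? $_SERVER['HTTP_CF_CONNECTING_IP'] : $_SERVER['REMOTE_ADDR'];
$country = isset($_SERVER['HTTP_CF_IPCOUNTRY']) ? $_SERVER['HTTP_CF_IPCOUNTRY'] : '??';

$uri = isset($_GET['uri']) ? $_GET['uri'] : '';
$referrer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

// Belt-and-suspenders for the Cloudflare caching problem noted below.
header('Cache-Control: no-store, no-cache, must-revalidate');

// 2. Log to the 15-minute log (read by the cron job) and to the daily log.
$line = date('c') . "\t$ip\t$country\t$uri\t$referrer\t$ua\n";
file_put_contents('/home/me/logs/images-15min.log', $line, FILE_APPEND | LOCK_EX);
file_put_contents('/home/me/logs/images-' . date('Y-m-d') . '.log', $line, FILE_APPEND | LOCK_EX);

// 3. Decide whether to serve.
$ok = false;
// Referrers on the whitelist get an 'ok'. (Placeholder domains!)
if (preg_match('#^http://(www\.)?(friendlyblog\.example|otherblog\.example)/#i', $referrer)) {
    $ok = true;
}
// Recent images get an 'ok'. (These months need editing as time passes.)
if (preg_match('#/wp-content/uploads/(2011/12|2012/0[124])/#', $uri)) {
    $ok = true;
}
// (The real script also hands the request to ZBblock at about this point.)

if ($ok && preg_match('/\.(jpe?g|png)$/i', $uri) && strpos($uri, '..') === false) {
    $path = $_SERVER['DOCUMENT_ROOT'] . $uri;
    if (is_file($path)) {
        header('Content-Type: ' . (preg_match('/\.png$/i', $path) ? 'image/png' : 'image/jpeg'));
        header('Content-Length: ' . filesize($path));
        readfile($path);
        exit;
    }
}

// Everyone else gets the lying '503'.
header('HTTP/1.1 503 Service Unavailable');
exit;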

Note: if a blog runs behind Cloudflare, you must create a rule to prevent Cloudflare from caching any requests to ./imageXXX.php. (Some of you recall the experiment with the yellow and lavender images. I ran that when I couldn’t figure out what the heck was going on. It turns out Cloudflare caches things, and if a user at time 0 was sent to “imageXXX.php”, all other users in the geographic vicinity of that user were sent the same image!)

So, basically: imageXXX.php logs all requests. It sometimes serves the image. It sometimes bans you locally. And if you request a ridiculous number of images in a very short amount of time and start changing your user agent to experiment to discover whether the reason you can’t see images is your user agent, you will be banned at Cloudflare.
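For the curious, here is a similarly stripped-down sketch of what a ‘checkfornasties’-style cron script can look like. Again, assumptions abound: the log format matches the sketch above, the thresholds are invented, and the actual ban is left as a stub because that call depends on how you ban IPs at Cloudflare.

<?php
// Sketch of a 'checkfornasties'-style script, run by cron every 15 minutes.
$maxHits = 100; // invented threshold
$maxUAs = 2;    // two user-agents in 15 minutes is not excessive
$whitelist = array('1.2.3.4'); // placeholder whitelisted IPs

$hits = array();
$uas = array();
foreach (file('/home/me/logs/images-15min.log') as $line) {
    $f = explode("\t", rtrim($line, "\n"));
    if (count($f) < 6) { continue; }
    $ip = $f[1];
    $ua = $f[5];
    if (in_array($ip, $whitelist)) { continue; }
    $hits[$ip] = isset($hits[$ip]) ? $hits[$ip] + 1 : 1;
    $uas[$ip][$ua] = true; // track distinct user-agents per IP
}

foreach ($hits as $ip => $n) {
    if ($n > $maxHits || count($uas[$ip]) > $maxUAs) {
        ban_at_cloudflare($ip);
    }
}

function ban_at_cloudflare($ip) {
    // Stub: in practice, call Cloudflare's API (or however you ban IPs there).
    error_log("checkfornasties: would ban $ip at Cloudflare");
}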

Is imageXXX.php available to others?
Not yet. It will be eventually. Now that I know the method works, I need to organize this so people with IT skills even lower than mine can easily use it without my needing to provide a tutorial for each and every person who wants to give it a whirl. (People with great IT skills probably don’t even need my program!)

The next question people are likely to have is: which requests get sent to this file? There are three basic ways to get sent to the file. Because I suck at .htaccess, I wrote three separate blocks of code. (Brandon, Kan, or anyone who can tell me how to make these shorter, please do. I know it can be done, but in this first phase, I was hunting for ‘effective’, not ‘CPU-efficient’.)

The three blocks of code are described below:
Block I:

# bad image referrers
RewriteCond %{HTTP_REFERER} ^http://(.+\.)?mydomain\.com/$ [nc,or]
RewriteCond %{HTTP_REFERER} ^http://(.+\.)?mydomain\.com$ [nc,or]
RewriteCond %{HTTP_REFERER} ^http://(.+\.)?mydomain\.com/musings$ [nc,or]
RewriteCond %{HTTP_REFERER} ^http://(.+\.)?mydomain\.com/musings/$ [nc,or]
RewriteCond %{HTTP_REFERER} index.php$ [nc,or]
RewriteCond %{HTTP_REFERER} ^feed [or]
RewriteCond %{HTTP_REFERER} ^$
RewriteCond %{REQUEST_URI} /wp-content/uploads/
RewriteCond %{REQUEST_URI} !(2011/12|/2012/01|/2012/02|2012/04)
RewriteRule .*\.(jpe?g|png)$ http://mydomain.com/imageXXX.php?uri=%{REQUEST_URI} [L]

Motivation: Way back in December, when I first began working on getting the hammering of the site to stop, I noticed that image scrapers were persistently trying to load every single image hosted at this site back to 2007. Requests came in at a rate faster than 1/second, from a range of IPs (including some really weird ones, like a prison system in Canada). Many, many, many of these requests had referrers that were clearly missing, probably fake or, even worse, certainly fake. For example: some came from referrers at the top of my domain (i.e. http://mydomain.com/ or http://www.mydomain.com/ ) with no blog post listed. This was very odd because if you hack back to the top of my domain (http://mydomain.com/) you will see the index page has no images; any image request presenting that referrer is faking the referrer. The conditions up to and including the one containing “index.php$” all relate to fake referrers. The condition containing “RewriteCond %{HTTP_REFERER} ^$” matches a request with no referrer (which can be legitimate). In contrast, a request for an image from a referrer containing “feed” is probably fake. Any request for an older image in my ‘/wp-content/uploads/’ directory with those referrers needs to be logged and scrutinized.

As discussed above, the script is coded to sometimes send images. In the case above, the algorithm never sends the image. That means that if you request an older image from any site that includes the word ‘feed’ in the URL, you will not be provided the image. (I may tweak this if I see legitimate requests in the logs.)

Initially, I thought Block I would be enough to handle my problem. In fact, initially, I thought just blocking requests containing ‘feed’ in the referrer would be enough. But it seems to me that as I began to block, new “methods” were being developed in parallel.

I began to notice ridiculous attempts to access zillions of images from a variety of IPs in weird places (like that prison system in Canada, I kid you not). I also noticed these often had unusual user agents like “traumaCadX”, which is used to process X-rays. A request using this user agent came from an IP that seemed to correspond to a group specializing in providing hotspots to airports, and it was scraping images. Weird.

These requests were sending referrers that many would not wish to block. Some contained “google” or other search agent strings. To catch everything using “weird” referrers, I added another block:

Block II
# catch known image user agents and google through imageXXX.
# note: this does catch the google image bot.

RewriteCond %{HTTP_USER_AGENT} (image|pics|pict|copy|NSPlayer|vlc/|picgrabber|psbot|spider|playstation|traumaCadX|brandwatch.net|search|CoverScout|RGAnalytics|Digimarc|java|getty|cydral|tineye|clipish|Chilkat|web|Webinator|panscient|CCBot|Phantom|sniffer|Acoon|Copyright|ahrefs) [nc]
RewriteCond %{REQUEST_URI} /wp-content/uploads/
RewriteRule .*\.(jpe?g|png)$ http://mydomain.com/imageXXX.php?uri=%{REQUEST_URI} [L]

Note that I use [nc] (no case) in the list of user agents to block. This is because various companies’ capitalization conventions seem to change over time. Letter sequences like “image”, “pics” and “pict” appear in various image scrapers like ‘PicScout’, ‘picsearch’ and ‘pictobot’. “Search” also often appears in various bots – but luckily not in the Google bot, the Bing bot or any other bot I want to permit to visit. Some of these strings appear in mystery bots whose documentation I could not find.

Also, I am currently not too concerned about efficiency. I have not double-checked the list to eliminate, say, “Webinator” on the grounds that it’s already covered by “web”. The reason is that I continue to manually check the logs to determine whether a short version might be over-inclusive. If it is, I don’t want to forget that I need to retain “Webinator”.

Requests from these user agents go through imageXXX.php. Sometimes they are served images – which is important, because the appearance of ‘image’ in the list means Googlebot-Image requests do go through the script. I can’t remove “image” from the .htaccess Block II, because scrapers can fake user agents; some do try to pass as “Googlebot-Image” when scraping. (Fortunately, ZBblock will catch some of those. My script that checks for ridiculous numbers of requests in a 15-minute window also catches some. Also: the script does not necessarily send Googlebot-Image the images.)
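As an aside, a claimed Googlebot-Image request can be checked with the reverse-then-forward DNS test Google recommends for verifying its crawlers. A sketch of the idea (this would live in the .php script, not .htaccess, and since DNS lookups are slow, you would want to cache the results):

<?php
// Sketch: verify a claimed Googlebot(-Image) via reverse-then-forward DNS.
function is_real_googlebot($ip) {
    $host = gethostbyaddr($ip); // reverse lookup; returns the bare IP on failure
    if (!preg_match('/\.(googlebot|google)\.com$/', $host)) {
        return false; // the reverse lookup is not a Google hostname
    }
    // Forward-confirm: the hostname must resolve back to the same IP.
    return gethostbyname($host) === $ip;
}

A scraper that merely fakes the Googlebot-Image user agent from its own IP fails the reverse lookup, no matter what its user-agent string says.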

Unfortunately, the two previous blocks aren’t enough. Bots can spoof user agents and spoof referrers. What if a bot tells me it’s using “Mozilla something or other” and comes from “http://joesblog.com” requesting an old image? Did I start to see this? Yes, I did. I don’t mind if the requests are for new images, or if I can verify that they are from a blog that does link me. So, I added this:

Block III
# catch almost anything looking for an old image and send it through imageXXX.
RewriteCond %{REQUEST_URI} /wp-content/uploads/
RewriteCond %{REQUEST_URI} !(2011/12|/2012/01|/2012/02|2012/04)
RewriteCond %{HTTP_REFERER} !(Whitelisted_domains|whitelisted_blog_posts) [nc]
RewriteRule .*\.(jpe?g|png)$ http://mydomain.com/imageXXX.php?uri=%{REQUEST_URI} [L]

This block sends requests for all old images through the script unless those requests come from blog posts I have verified actually link my images. I can manually verify that a blog post links my images and add it to the “RewriteCond %{HTTP_REFERER} !(Whitelisted_domains|whitelisted_blog_posts)” list. (Should I notice scrapers trying to take advantage of this whitelist, I can eliminate that line, send the blogger a note, and request they copy and host my images themselves. But for now, whitelisting some blog posts that send a lot of traffic saves CPU.) Also, I need to edit ‘RewriteCond %{REQUEST_URI} !(2011/12|/2012/01|/2012/02|2012/04)’ from time to time, because when 2013 rolls around, images containing “2011/12” in the URL will be old.
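Incidentally, if that hand-editing gets old, the month list can be generated by a small script run from cron. A sketch of the idea, assuming the window is simply ‘the last four months’ (which is close to, though not exactly, my list above):

<?php
// Sketch: print a RewriteCond excluding roughly the last four months of uploads,
// so the 'recent images' exclusion need not be hand-edited as months roll over.
$months = array();
for ($i = 0; $i < 4; $i++) {
    $months[] = date('Y/m', strtotime("-$i months", time()));
}
echo 'RewriteCond %{REQUEST_URI} !(' . implode('|', $months) . ")\n";

Run in April 2012, that prints ‘RewriteCond %{REQUEST_URI} !(2012/04|2012/03|2012/02|2012/01)’; you would still need to splice the output into .htaccess by whatever mechanism you trust.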

Once again: the imageXXX.php script sometimes just sends the image. In fact, the referrer whitelist inside imageXXX.php is much wider than the one in .htaccess. Many of the requests diverted by this block are shown the image. But because these are logged, I can catch image scrapers trying to race through images rather quickly.

Questions
For those who read this far: I’d like to know whether you can immediately see a huge hole in the strategy, one that a person who works at a company that makes money by scraping images, and who is very strongly motivated to scrape, might exploit. I tried to think of some, but it’s always better to ask people. Also, if you can tell me how to rewrite Block I to eliminate at least 2 lines, let me know. And if there is something obviously stupid about having three blocks in .htaccess, let me know that too – and tell me the solution!

Meanwhile, for others: open thread. And warning: you will be seeing what I do to get rid of the cracker-bots and referrer spammers! I’m doing this precisely to get feedback from the IT people who visit and know more than I do. But I can report: cracker ’bots and referrer spam are way down. (And since the rate of both is entirely independent of the rate of real visits, this is not merely because traffic is down due to light blogging.)

Happy Christmas! Illustration by Josh.


Editing Comments: Turning on Plugins

As some visitors know, due to frequent server errors I turned off many non-spam-related plugins several weeks ago. This meant visitors could no longer edit comments or have responses to their comments emailed to them, archives on the sidebar vanished, the official “contact lucia” form went away, and various other nice functions were lost. I’m still getting some server errors, but the number has decreased. So, I’m turning plugins back on.

During the process of reactivating plugins, I’ve hunted down more recent plugins that might do things a bit more efficiently (or not) and which do things people might find handy. Yesterday, I activated some plugins whose functionality you might notice including:

  1. “After the Deadline” for comments which, should you choose to use it, will proofread your comment for you. To use it, after typing your comment, look for the “ABC” icon with a green check mark below the comment box, near the “submit” button. Click it, read the suggestions, and make whatever changes you wish to make.
  2. Auto-Close Comments, Pingbacks and Trackbacks. This automatically closes comment threads after a predetermined time. I like this because I notice threads left open too long nearly always degenerate into food fights. I’ll be setting this to 7 days. (I can re-open.)
  3. Editable Comments, which permits you to edit or delete your comments. It did not quite work out of the box; it took a bit of hunting and fiddling. The major issue is that line 144 in the main plugin file pulls time out of the WordPress database using $comment->comment_date, which is stored in local time. That date was later compared to time determined using a PHP call to time(), which is GMT-based. As a result, all comments always looked at least 7 hours old (an offset-from-GMT artifact, presumably). Not good.
     
    WordPress conveniently stores both GMT and local time, so I edited the plugin to pull out $comment->comment_date_gmt. That seems to fix the issue. (I left a comment at the developer’s blog.)
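     
    Roughly, the comparison after the fix works like this (my paraphrase in PHP, not the plugin’s exact code; the 5-minute and 1-minute windows are the settings described below):
     
    // Both sides of the comparison are now GMT-based, so comment ages come out right.
    $posted = strtotime( $comment->comment_date_gmt . ' GMT' ); // stored in GMT
    $age = time() - $posted; // time() is seconds since the epoch, also GMT-based
    $can_edit = ( $age < 5 * 60 ); // 5 minutes to edit
    $can_delete = ( $age < 1 * 60 ); // 1 minute to delete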
     
    I’ve now given you 5 minutes to edit and 1 minute to delete. If you leave a comment, you should see both buttons when your comment appears. If, using the same IP address, you reload after 1 minute, you should see the edit button only. If, using the same IP address, you return after 6 minutes, you should see nothing. Both buttons seem to work for me. Feel free to test and tell me how it works.
     
    I also tweaked the plugin author’s recommendations for loading the plugin. While I would like people to have time to edit, I also don’t like the oddity of people changing comments after someone else has responded to them. This occasionally happens (though not often). To minimize this weirdness, my comments.php file counts the total number of comments to display, and then only displays the “edit” and “delete” buttons if fewer than 2 comments have been added after yours. I did this by wrapping the commands that insert the “edit” and “delete” buttons in an “if” statement like this:
     
    // Show the edit/delete buttons only if this comment is near the end of the list.
    if ( $totalNumComments - 2 <= $comment_number ) {
        if ( class_exists( 'WPEditableComments' ) ) { WPEditableComments::edit('Edit'); echo " "; }
        if ( class_exists( 'WPEditableComments' ) ) { WPEditableComments::delete('Delete'); }
    }

     
    I’m tempted to extend the conditional to only show “edit” and “delete” buttons to people whose comments were approved. This won’t affect people who aren’t moderated.

 
I think that’s the full set of “visitor-side” plugins I’ve activated.

Meanwhile, I’m sure many of you have noticed I’m posting less frequently. This is due to … dieting. I have reached the phase where I am trying all sorts of new recipes which, in principle, is not that time-consuming, but in practice means I keep going to the library, getting cookbooks, flipping through them and trying out new ones. I’ve been posting the ones that taste good (to me) at my former knitting (now mostly recipe) blog. I mostly don’t post the failures, but I’m thinking of posting those too, because almost no one reads that blog anymore and it helps me keep track of what works. (For those on diets, I also estimate the calorie count, glycemic index and other nutritional information using an online nutrition estimator.)

Now, let me close with this: like other non-climate threads, this is an open thread. Do tell me whether the edit/delete buttons are working for you, and whether they seem to display for the correct amount of time. Do tell me if you notice more than the usual number of server errors. (I’ve got caching off right now; I’ll be reactivating it this afternoon.) Do tell me if you like the suggestions from the proof-reader. Feel free to discuss Ice (I need to get some bet results up). GISS and Hadley should be posting temperatures soon… so there should be a few more blog posts coming up soon.

More Blogging Issues: Trying something today.

Today, I’m going to try something that may reduce the memory problems – but it may also make things worse temporarily. I am going to set “SuperCache” to create a preload cache of every blog post. This could reduce CPU and memory use by eventually making the blog deliver static pages to the people (and mostly ’bots and crawlers) hitting the older posts. I’ve got comments closed on all those anyway. It’s a bit wasteful to be using any CPU or memory to run PHP to deliver those pages.

Anyway: the downside is that the preload process may itself be CPU-intensive, and so may also be memory-intensive. So, things could be much worse today. The upside: once every single post is cached, both CPU and memory use may drop.

You can read about another blogger’s experience here. The conclusion of the blogger (who also writes lots of nice plugins) is:


See that nice dip in the graph for this week? I started to preload the cache used by WP Super Cache last Sunday and it’s made a noticeable difference in the load on my server here. The big spike is the preloading process.

There should be no issues with stored cache files. That’s one thing Dreamhost is very generous on. But, if you have trouble commenting today, it may be due to the pre-load plugin operating.

I’ll also be exploring paged-comments plugins. But I suspect running PHP to build a page for every ’bot request may be the larger issue.

Blogging Issues: Server Errors

Frequent commenters have noticed that the blog is throwing a lot of server errors lately. I have not been able to identify the precise cause. Communicating with Dreamhost, it appears that something launches a process that loads a large amount of memory. This causes Dreamhost to start killing processes to keep me from taking down shared hosting, which is fair.

Unfortunately for me, the error logs show which processes were interrupted, but these are not necessarily the processes that sucked up too much memory. In any case, the process that is interrupted tends to be the version of PHP being run. This is pretty unenlightening, since the problem almost certainly lies with a particular plugin, not PHP overall.

Since I have no particular skill in identifying the problem, I’ve been turning off plugins. Some of you will notice that editing functionality for comments has been turned off. Some other plugins are also turned off, but most of you will not notice. (The plugin that inserts ads is off. A plugin that makes the archives look pretty is off. The plugin that caches things to reduce CPU was turned off last night, and turned back on; I thought it might suck memory while sparing CPU. Well, turning it off definitely makes the blog use too much CPU – around 6 am, server errors were nearly constant.)

Dreamhost staff suggested merely having unused plugins in the plugin folder can soak up memory, so I moved nearly all unused plugins out of the folder. (I had a shockingly large number of plugins in there.) If you experience no server errors today, we’ll have found the problem. 🙂

In the meantime, if any of you happen to have any expertise in optimizing WP installations, let me know. Thanks in advance.

Cherry Sorbet for a Hot Summer Day.

It’s been a long time since I posted a recipe. In comments, I mentioned I was making sorbet and I thought I’d share.

I’ve also mentioned I’m on a diet. I’m trying to lose 1 lb a week, which means I need a 500-calorie deficit a day. So, why the heck am I making sorbet? Well, it’s diet sorbet! (Actually, I’m not sure if it’s sorbet, because I add a little cream. But in the US the legal designation for “sherbet” requires 1%–2% dairy, and this recipe has far less than that; so it’s not sherbet either. I’m not selling this, so… whatever.)

Here’s how I came to be making ice cream: way back in grad school, Jim and I owned an ice cream maker. We often made sorbet – normal, sugar-loaded sorbet. It’s easy to make and, because there is no time for ice crystals to form, it’s better than most sorbet you can buy. Eventually, as with many home food-making fads, we got bored of making sorbet and gave the machine to Goodwill. Years passed.

Now, I’m on a diet. It’s summer; it was getting hot. I discovered several things:

  • I can buy erythritol-based sugar substitutes at Whole Foods under the brand “Sweet Simplicity”. Erythritol is sweet and has the necessary bulk to act as a fairly decent substitute for sugar in a sorbet recipe without degrading the mouth-feel of the sorbet. Aspartame, sucralose, stevia, and saccharin will make sorbet sweet, but they won’t contribute anything positive to the oozy-gooey mouth feel you like as ice cream, sherbet or sorbet dissolves in your mouth. Stevia and saccharin also taste icky. (You can find brands of erythritol at Amazon; see erythritol.)
  • I discovered a thickener called “konjac flour”, which is also sold as glucomannan – a diet supplement (which may or may not work). It definitely works as a thickener, thickening even without heating. It happens to be an ingredient used to make Turkish ice cream. I don’t use it the way the Turks use it, but adding a little konjac makes low-fat ice cream or sorbet a little creamier than if you don’t add it. Obviously, adding lots of cream does the same thing. 🙂 To learn about its use in ice cream, read: 1, 2 and 3. (I bought my konjac flour mail order. To find suppliers, go to Amazon: glucomannan.)
  • I found a used 1 1/2 quart Cuisinart ice cream machine on Craigslist; I paid $15. (Naturally, Amazon has ice cream makers. The ice cream maker I bought is currently on sale for roughly $48.50 plus shipping.)
  • I googled and found a bunch of sorbet recipes. I also read a fair number of articles on how different ingredients affect the consistency of ice cream. Then, I decided to experiment. I’ve varied the amounts and types of thickeners, amounts of fruit, and choice of sweeteners. For some reason, using gelatin and some konjac together seems to give a better mouth feel than using gelatin or konjac alone. Gelatin alone is pretty good though, so if you can’t get the konjac, or think it’s too weird for words, leave it out. Using the erythritol is also better than using aspartame or sucralose. Adding the 1 T of cream also makes the sorbet nicer.

Here’s my current “standard” recipe for fruit sorbet for two. The sorbet pictured is cherry. (This makes lots of ice cream for 2.) To make the ice cream, you will need a blender, a measuring cup, measuring spoons, and an ice cream maker.

Recipe

Serves two people who want to make hogs of themselves, or 4 who consume more moderately.

  • 3/4 c water total.
  • 1/2 package of unflavored gelatin (e.g. Knox brand.)
  • 1/3 – 1/2 cup Erythritol sugar substitute.
  • 9 oz. unsweetened frozen fruit. (This is 3/4 of one of the 12 oz. bags stocked at my grocery store.)
  • 1 T heavy cream.
  • 1/4 t konjac powder. (Optional, but it does make the sorbet a little creamier.)

Have your ice cream maker prepared to make ice cream. (With my maker, this means freeze the freezer bowl for 12 hours to make sure it’s cold. Place it on the machine stand and insert the paddle.)

Pour 1/4 C water into a microwave-safe measuring cup; sprinkle the gelatin into the water and let sit for 5 minutes. Add the remaining 1/2 cup water and the sugar substitute; stir to blend. Place in the microwave and heat until boiling; stir to make sure all the sugar substitute is dissolved.

Meanwhile, place the frozen fruit and cream in a blender. When the water mixture has boiled, let it sit for 1 minute, then pour it into the blender. Blend until the fruit is pureed. While the blender is running, slowly sprinkle – do not just dump – the konjac (i.e. glucomannan) onto the fruit mixture. Let it run another minute.

Pour the mixture into the ice cream maker bowl, and churn until firm. (Approximately 10–30 minutes, depending on how cold both the frozen fruit and the freezer bowl were.)

Serve immediately, or scoop into a covered container to firm further in the freezer. I find the texture is excellent if I allow it to firm up to 1 hour. Experiments without the cream, glucomannan and gelatin indicate a serious degradation of quality if frozen 24 hours. (The sorbet becomes rock hard, and you can see ice crystals. Allowing it to stand on the counter and stirring can remedy this, but it’s an issue.) I haven’t had any of this particular “gelatin/glucomannan/cream” recipe left over to test for degradation of consistency after freezing 24 hours.

Nutrition

I entered the ingredients, using frozen cherries, at Nutrition Data to create a nutrition analysis based on dividing the recipe into 2 servings. The short story is 100 calories: 15 g carbs, 3 g fat, 3 g protein. The medium story is given in the chart below. The long story involves even more charts, but I don’t know how to share my recipes on Nutrition Data.

If you have an ice cream maker, can find the ingredients, and are on a diet, give this a try. Unless your doctor has you on a very severe low-calorie or low-carb diet, or you have an aversion to sugar substitutes or weird ingredients, you’ll love this. If you do have an aversion to weird ingredients, use real sugar and leave out the konjac. That probably tastes great too – though the calorie count per serving will increase to roughly 300 calories, with the increase all from sugar.

Family Business.

I’m off on family business and will be away until October 10. (I’m helping my ailing Dad in Sarasota fly to Albuquerque where my brother lives.)

It will probably be impossible for me to get onto the ’net until I get back. So, the final results for the UAH bets will be delayed. (Sorry!)

In the meantime, I’ve set comments on all posts to auto-close after 4 days. (I’ll pre-date an open thread to permit people to continue conversations, but I figure doing this will keep any thread from hitting some ridiculous number of comments.)

Wish me luck with my Dad. (I’ll need it!)