Thursday, October 14, 2010

HTML5: Up and Running

Author: Mark Pilgrim
Format: Paperback, 240 pages
Publisher: O'Reilly Media; 1 edition (August 25, 2010)
ISBN-10: 0596806027
ISBN-13: 978-0596806026

I became impatient with the history lesson in Chapter 1 and wanted to test drive HTML5. What's different? What's new? Guess I'll have to work to find out. As the blurb I found at Amazon said of HTML5, "It's not one big thing." It's not a matter of learning a new markup language from scratch, which is both a good and a bad thing. In fact, again to quote the author's blurb, "'Upgrading' to HTML5 can be as simple as changing your doctype...In HTML5, there is only one doctype: <!DOCTYPE html>." That's encouraging, but just how easy is it to learn HTML5, and how easily can you learn it from Pilgrim's book? I went in search of the answers.
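The doctype change really is that small. A before-and-after for the curious (the HTML4 line shown here is the common Strict doctype, one of several variants you might have in an existing page):

```html
<!-- The old way: one of several verbose HTML4/XHTML doctypes -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
    "http://www.w3.org/TR/html4/strict.dtd">

<!-- The HTML5 way: this is the entire doctype -->
<!DOCTYPE html>
```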

The first place I went was the book's Preface to see where I could find a link to the source code. I was pointed to the author's site, Dive Into HTML5, the original online book on which the book I'm reviewing is based, but it didn't have a clear-cut link to anything called "source code". Maybe this is where "It's not one big thing" comes back to bite me.

Chapter 2: Detecting HTML5 Features introduced me to Modernizr (yes, I spelled it right), a nifty JavaScript library that detects which HTML5 and CSS3 features your browser supports. It also creates a self-titled global JavaScript object containing a property for each such feature. However, if your browser doesn't support certain HTML5 features, Modernizr won't fix them. But what about learning HTML5? We kind of got away from that.
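The detection technique the chapter builds on is simple existence checking: create an element (or probe an object) and see whether the expected method is actually there. Here's a minimal sketch of the idea; the `fakeDoc` stand-in is my own invention so the snippet runs outside a browser, whereas in a real page you'd use the global `document` directly, or just let Modernizr do this for you:

```javascript
// A sketch of the existence-check pattern behind HTML5 feature detection
// (the technique Modernizr automates). The fakeDoc object below is a
// hypothetical stand-in so this runs outside a browser.
function supportsCanvas(doc) {
  // A canvas-capable browser returns an element with a getContext method;
  // older browsers return a generic element without one.
  var el = doc.createElement('canvas');
  return !!(el && typeof el.getContext === 'function');
}

// Fake "document" that pretends canvas is supported.
var fakeDoc = {
  createElement: function (tag) {
    return tag === 'canvas' ? { getContext: function () {} } : {};
  }
};

console.log(supportsCanvas(fakeDoc)); // prints: true
```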

Oh wait! Chapter 3: What Does It All Mean? helps. I found the link to a set of code examples which got me started. Then, as I progressed through the book and through the author's site, which runs in parallel and often in duplicate, I realized how the book was organized. This is no small feat, but maybe it was my expectations that made the task difficult. I was expecting a front-to-back guide to getting started with HTML5 and what I discovered was a collection of loose pieces in a box.

Learning HTML5 from Pilgrim's book is like putting together a jigsaw puzzle. When you first open the box, all you can see are a collection of jumbled pieces that, taken at a glance, don't make a lot of sense. If you had never encountered a jigsaw puzzle before, you might look, become confused as to what these pieces mean lying in such disarray, close the lid, and walk away looking for something more comprehensible.

One missing piece of the puzzle, so to speak, is a knowledge of HTML4. Imagine the raw code of an HTML4 web page. Now imagine that you are presented with a list of tags and other markup elements you're not familiar with. What are you supposed to do with them? How do they work? What do they replace (if anything)? Using Chapter 3 on his website as an example, I tried to navigate around until I could find something I could sink my teeth into.

Got a lesson on DOCTYPE, history lessons on the root and head elements, lots of other stuff to scan past, a section called A Long Digression Into How Browsers Handle Unknown Elements, more stuff...more stuff...then it began to register. I started to hit spots on the pages that said things like, this is how we used to do it (with an example of the old code) and this is how you do it in HTML5 (with an example of the new code). The information is there; it's just not organized and called out the way I wanted it.
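To give a flavor of the before/after pattern those spots follow (the id and element choices here are my own illustration, not lifted from the book):

```html
<!-- How we used to do it: generic divs with meaningful ids -->
<div id="header">
  <h1>My Weblog</h1>
</div>

<!-- How you do it in HTML5: dedicated semantic elements -->
<header>
  <h1>My Weblog</h1>
</header>
```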

I went back to the book, compared it to the same pages on the author's site and "got" the organization. It may be a matter of how I think vs. how the author thinks, but from that point on, it was easier to tease what I wanted to know out of the book's pages.

I think HTML5 is fabulous but I'm not sure that HTML5: Up and Running is the best book to use as an introduction. It most definitely is not the best book to use for an introduction if you aren't familiar with HTML in general. I'd recommend navigating the author's website before buying the book. If you "get" the website, you'll "get" the book. They're pretty much the same thing.


Using GIMP: Kindle Edition

It bothered me that all of my other books were available through Amazon except this one. Amazon is a nice place to point folks when they want to get a quick idea of my professionally published works. Finally, Amazon released the Kindle Edition of Using GIMP (July 2010). Now all the Amazon page lacks are a few reviews (sigh).

Friday, August 27, 2010

Coming in November: MCTS: Microsoft SharePoint 2010 Configuration Study Guide

I hate keeping secrets, mainly because I'm no good at it, but when you sign an NDA with a publisher, you can't tell people what book you're writing until the publisher starts marketing it. Finally this one showed up at Amazon (I'm doing the copy edits now) so I can talk about it.

Need to know the ins and outs of SharePoint Server 2010? Considering taking (and passing) the Microsoft SharePoint Server 2010 Configuration (70-667) certification exam? That's why I wrote this book: MCTS: Microsoft SharePoint 2010 Configuration Study Guide (70-667).

Expect to see it available November 22, 2010 or pre-order now and avoid the rush.

It's Alive! Using GIMP is Now Available!

There were times when I thought I'd never see this day come, but my new book and first eBook, Using GIMP, is now available online.

I've never written a book like this before, so I'm a little nervous about how it'll be received. I'm sorry I don't have any "author's copies" to give out, but since everything is accessed online, QUE can't actually ship me any copies. Please give it a whirl and let me know what you think.

Friday, July 9, 2010

Review: Getting Started with Processing

Authors: Casey Reas & Ben Fry
Format: Paperback, 208 pages
Publisher: Make; 1st edition (June 17, 2010)
ISBN-10: 144937980X
ISBN-13: 978-1449379803

I return to the topic of "learning how to program" every now and again because I haven't found a truly painless way of teaching programming to people who aren't naturally wired for it. I don't know if Processing is the answer, but it sure seems to be in the running. It has the benefit of being an open source program written to appeal to graphic designers who need or want to learn programming. Let me explain.

I've reanimated my interest in drawing and graphics recently (a long story) and am doing most of my work in GIMP, with which I'm fairly familiar. GIMP has a lot of wonderful features and a few drawbacks. I've tried to augment with Inkscape, but I've got so many other projects going, it's hard to dedicate the time to really get familiar with Inkscape. Then I received an invitation to review Getting Started with Processing, written by the creators of the Processing program. I thought that was probably (hopefully) a good sign, so I jumped at it.

At only 208 pages, it seemed like this would be a quick read (and my stack of books to review is growing rapidly, so I need to work through a few). Quick reading, yes. Quick to get through, no. Not with the practice this requires. Processing is an interface that uses common programming syntax to create static, 3D, and animated graphics. It doesn't look like much when you install it, but the potential of Processing is amazing.

Installation, though, was the first of my concerns. If you have 32-bit Windows, it's probably your best bet; the book said that trying to install Processing on my 64-bit Windows 7 machine was chancy at best. Installing on Linux is fine if you are savvy enough to do the job manually rather than with a package manager. While Processing is open source, you won't find it in the Ubuntu repositories, so apt-get and aptitude aren't options. I only mention this because more regular desktop users are gravitating to Ubuntu, so the "average" Linux user may no longer be as comfortable in the shell. Oh, and for the Mac users out there, there is an installation file for Processing that'll work for you.

In some ways, the basic process is fairly simple. Input the proper code into the main input pane, click Run, and your graphic appears. Yowza! Just like that. There are plenty of exercises to try out in the book, but I really would have liked it if the authors had made the location of the book's code samples more explicit. I went looking for them on the Processing website.

You can find tutorials and code samples for Processing on the Processing site, but I assumed the code samples for the book would be included on the tutorials page. My fault. Click the image of the book's cover on the site's main page and go to the book's page to find the zip file containing the sample code. Of course, the site tutorials have a lot more examples of really spectacular work, so beyond the book, you can really have fun.

Yes, along with creating some really cool images, you will learn programming basics, or at least how to copy the examples of for loops and such that are presented. Also, having some basic idea of how web graphics work helps, particularly understanding RGB color, as you have to manually enter these values as part of the code.
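As an illustration of what manually entering RGB values looks like, here's a minimal sketch of my own devising (typed from my memory of the Processing drawing functions, so verify it against the built-in examples under File -> Examples before relying on it):

```processing
size(200, 200);            // 200x200 pixel display window
background(255, 255, 255); // RGB white: red, green, and blue all at 255
fill(255, 0, 0);           // RGB red fill for whatever is drawn next
ellipse(100, 100, 80, 80); // an 80-pixel circle centered in the window
```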

I know Processing has been around for a while, but I would have appreciated a little more automation in the interface. It would be nice to click File -> Save As and save an image as a PNG or a TIFF, but it's a little more complicated than that. It's easier (for me, anyway) to export an image so that it can be uploaded to a web server than to create and save a simple static graphic.

There are plenty of graphics engines out there, including open source solutions, but none of them is truly intuitive, and all of them require quite a bit of practice to gain proficiency. Processing is pretty much the same in that respect, but the advantage is that you also learn programming basics at the same time. If you have even a little bit of a background in programming and algebra, you're that much further ahead.

Comparing the book to the possibilities I discovered on the Processing site told me that the book only covers the basics. You won't be a Processing guru by page 206, but you will have the essentials of the language (which is very simple) and the interface, enough to make your own static and animated designs. The interface itself has examples (File -> Examples, and then choose the desired submenu), so you can see the code and the result of specific effects.

I'm probably not doing the book or the program sufficient justice in my review, and while most graphic designers will probably want to stick with Photoshop and Illustrator (though they're hideously expensive), there's a lot to be learned and accomplished using Processing. If you don't believe me, go to the Processing exhibition page and see some impressive examples of work done exclusively in Processing.

Other value-added pieces on the Processing site include a wiki and an active forum, so if you decide to take up Processing, you're certainly not alone.

Visit their site, explore the resources, get the book. With computer-generated graphics and animations entering their mature stage in film and other venues, learning Processing could be the first step in a lifelong adventure. Enjoy.


Sunday, June 27, 2010

GIMP 2.6.9 Available for Linux...sort of

I was perusing the open source software related news this morning trying to wake up and came across an item at Tech Drive-in called Install New GIMP 2.6.9 in Ubuntu 10.04 Lucid Lynx. I know that GIMP 2.7.0 is the next stable version to be released and understood that it wasn't going to become available until the end of 2010 or the beginning of 2011. What's GIMP 2.6.9 have to offer?

According to the website:
It's been a while since the last release. Quite a few bug-fixes have piled up in the stable branch, so here's another release in the stable GIMP 2.6 series.
There are just a ton of bug fixes included in 2.6.9 for a wide variety of languages; the full details are on the GIMP website. Should you be concerned? Probably not. I haven't found GIMP 2.4.6 to be in such a condition that I've been dying for an upgrade. Also, the option to use Synaptic (and thus apt-get) to perform such an upgrade doesn't exist on my Ubuntu 8.04 LTS (Hardy Heron) computer (no, I haven't gotten around to upgrading to the Lynx yet).

Even for the Lucid Lynx, you have to jump through a few minor hoops to install GIMP 2.6.9 and unless you really need the bug fixes listed, it's probably not going to change your GIMP experience appreciably if at all.

Not sure if I could make the 2.6.9 upgrade option available for my current version of Ubuntu or if I should care. I know I could download the 2.6.9 tarball and install that way, but I prefer to use apt-get/synaptic to manage my applications.

I checked and for Windows using the Windows installer, the latest version of GIMP available for immediate download is 2.6.8...close, but no cigar.

The interesting news is that 2.6.9 was released on June 23rd, making it the first release of any version of GIMP since last December. Looking forward to GIMP 2.7.0 in another six months.

Monday, June 21, 2010

Hardware Cheat Sheet

Credit goes to Geekologie for posting this one-page PC hardware "cheat sheet" on the web. According to the Geekologie about page, "Geekologie is a geek blog dedicated to the scientific study of gadgets, gizmos, and awesome. There are a lot of shiny new things out there, and Geekologie is dedicated to finding every last one of them for you."

The blog tends to lean towards things Star Wars, but has plenty of other interesting tidbits, photos and videos. Oh. You probably want to see the cheat sheet (since I've written a computer hardware related book or two, I'm naturally attracted to this sort of stuff). Here it is:

Computer Hardware Chart. After you click the link and the image loads, click the image to enlarge.

Thursday, June 10, 2010

Using GIMP: It's Getting Closer

Not sure exactly when the eBook will be released. All of the deliverables are in and most of the editing is done. I'm going over the PDF pages now to make a final check of everything, so the book isn't 100% done quite yet. Just to give you a taste, though, here are some samples from the front matter. Enjoy.


Wednesday, May 26, 2010

Chromium on Ubuntu 10.04 Slower than Firefox?

Maybe I'm being unfair. After all, I have been having networking problems with my Ubuntu 10.04 (Lucid Lynx) virtual machine running in VMware Workstation 7. It seems to be tied to a DNS problem. The VM doesn't pick up the DNS server addresses from the DHCP server on my network (though it gets an IP address just fine). I thought the solution was to point to Google's free DNS servers. That worked for a while, but then stopped. I tried using the DNS servers on my wee home server and DSL modem device, and that worked for a while too, and then stopped.
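For anyone fighting the same thing, the manual workaround amounts to a two-line config file. This is a hedged example: the addresses shown are Google's public resolvers, and note that on Ubuntu this file can get rewritten on the next DHCP lease renewal, which may explain "worked for a while, then stopped" behavior:

```
# /etc/resolv.conf -- hand-set DNS servers (may be overwritten by DHCP)
nameserver 8.8.8.8
nameserver 8.8.4.4
```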

I finally tried my ISP's DNS servers and that seemed to take hold, though I don't know why. It shouldn't make a difference but I've been surfing from my Lucid Lynx VM for two days now without a hitch. That leads me to my second problem.

I'm a big fan of Firefox so naturally, use it on all my Linux and Windows machines. I've tried Google's Chrome web browser on a Windows VM and like it, so I decided to install Chromium on the Lucid VM. Seemed to go great guns at first but then hit major snags.

Actually, I was having problems in both Firefox and Chromium which all seemed to be tied to my general networking problems for the VM. For the past two days, I've been surfing the web just fine with Firefox, though. Time to try Chromium.

For a minute or two, it seemed great. I thought I could give Chromium a real workout and see if it really was faster than Firefox, just as the speed tests between Firefox and Chrome suggest when run on Windows. I've got some bad news.

Firefox continued to let me surf the web at lightning speed, but after a few minutes, Chromium slowed to a wounded crawl. I tried Googling the problem but only came up with reports of a slow video playback problem in Chromium on Ubuntu.

Admittedly, before I had the chance to investigate further, I had to leave for work, so I won't be able to make more tests until I get home tonight. I suspect the issue may still be tied to Ubuntu Lynx running as a VM and the associated networking problems, but I can't be sure. Until I get home and have a chance to look at this more completely, has anyone else had an experience with poor Chromium web browser performance in Ubuntu 10.04?


Saturday, May 22, 2010

Happy 30th Anniversary "The Empire Strikes Back"

In honor of the 30th anniversary of the release of the Star Wars film The Empire Strikes Back, I'm posting a photo of a star destroyer. You are probably saying "So what?" but this photo is special to me. I bought it at a Science Fiction convention in Los Angeles a full month or more before Star Wars (1977) was released. It was my first real look at the world George Lucas created. Enjoy.

Tuesday, May 11, 2010

Copiers have hard drives?

I don't think much about copying machines. I just make my copies, print what I need to print, fax what I need to fax, and scan what I need to scan. What I didn't realize is that since about 2002, commercial copying machines have been built with hard drives that store, as images, everything that has ever been copied on the machine.

Now imagine you manage the copying machines for a medical center, insurance company, police department, or other organization that routinely prints and copies secure and confidential information. Imagine if some of the machines you manage are getting older and you replace them, surplusing the old machines for resale. Anyone who buys your old machines could potentially remove the hard drive and pull the data off of it; data that consists of all the documents ever processed through the machines. Am I being paranoid? Watch this:

Am I still being paranoid? Maybe your company should update its security plan to include how to dispose of copying machines. Just a thought.


Sunday, May 9, 2010

Ubuntu 10.04 Update: Networking

Not sure if this is an issue with the Lynx or with the fact that it's a VM (VMware Workstation 7). I've noticed over the past few days that networking's been spotty at best. Web pages take forever to load or the pages don't load and the connection times out. Same for twitter in twittergadget and Gwibber. Tried both the Firefox and Chromium browsers thinking it would make a difference but nada, tostada.

I'm aware that there's a wireless networking problem associated with the Lynx, but I'm using a strictly wired LAN with a DSL connection to hit the Internet. I can "refresh" the connection by rebooting the Ubuntu VM, but as of today, the connection only lasts a few minutes before it starts acting dodgy again.

Ubuntu itself is set to acquire a dynamic IP address, and as a VM, its network adapter uses NAT. The other VMs (Windows machines) aren't experiencing any similar problems. Also, my host PC and the other computers on the LAN aren't having a problem. If it were just a Firefox issue, I'd suspect a Firefox-specific problem I found reported online, but not with Chromium showing the same behavior.

Any ideas?


Friday, May 7, 2010

Do Chimpanzees write The Great Gatsby?

I named my blog "A MillionChimpanzees" to illustrate the vast multitude of people on the web, all blogging about all things, great, small, and silly. It's as much a way to make fun of my own efforts as it is anyone else's. With that in mind, I read this comic this morning and absolutely had to share it as well as the link to the original source. Enjoy.


Monday, May 3, 2010

Ubuntu 10.04 LTS Lucid Lynx First Impressions

Yeah, I know. There are about a million blog articles of this nature floating around on the web, but what the heck. Blogging is all about freedom of speech and expression of opinions and ideas. Here's what I've got so far on the Lucid Lynx.

First off, I was amazed that I could quickly and easily download the ISO for the 32-bit desktop. It was only about 24 hours after the initial release when I gave it a shot, and there were no delays at all. I find torrents obnoxious, so I did the straight download directly from the Ubuntu site. No muss, no fuss.

I joke that I don't try a new Windows desktop OS until the first service pack is released. That's usually pretty good advice, but even with Linux, I don't download and install a brand new release of a distro on my production machine. In this case, I ran the ISO directly in VMware Workstation 7 to give it a shot. I used the easy install option just for giggles. This bypasses the manual configuration for the OS which isn't always a good idea, but I figured the worst that could happen is that I'd experience a major fubar and have to blow away the VM.

Everything worked well. Installation was quick and the current version of VMware Workstation automatically installs VMware Tools for Linux, so it's an almost totally hands off experience. Then, when the GUI came up, I hit a snag. The mouse worked fine, but the keyboard was totally non-responsive. This could have been an Ubuntu issue, a VMware issue, or maybe wireless Dell keyboards just don't work and play well with Ubuntu. I fired up Google and started my search.

I found just about a ton of posts in different threads at the Ubuntu Forums. They all give more or less the same advice about solutions, but I specifically referenced a thread dealing with Ubuntu 10.04 and VMware Player, which worked out for me just fine. After using the virtual keyboard option to enter my password, I was able to log in, and thereafter my wireless keyboard behaved as expected.

I haven't had a lot of time to play with the Lucid Lynx VM as yet, but there were a few things I took care of right away. First, I installed Ubuntu Tweak, if for no other reason than to be able to put a folder for my home directory on the desktop. It offers a lot of other great features as well, but it disappoints me that so many simple configuration options don't come with Ubuntu "off-the-rack".

There are a large number of "what to do after you install Ubuntu 10.04" blogs and tutorials around; I chose one that seemed reasonably comprehensive and wasn't afraid to use the apt-get system to tweak Ubuntu.

I didn't follow most of the steps in the tutorial, at least so far, but I did run sudo apt-get install ubuntu-restricted-extras to enable Adobe Flash Player, JRE with Firefox plug-ins, and a few other things. I might even get around to installing the Google Chrome browser just to try it out on Linux, but Firefox serves me for now.

Oh, and I installed GIMP, which was a breeze using the Ubuntu Software Centre. I'll post more details as I get the chance to do something more substantial with the Lynx.


Friday, April 30, 2010

The USING Series: More than Just a Book

If you are into technical reading or writing, you've probably at least heard of the big publishing group whose imprints include Cisco Press, IBM Press, Prentice Hall Professional, and QUE Publishing, and which, among other publications, is responsible for the Unleashed series. While you may use books such as those published by Cisco Press without being overly concerned about the presence of a parent organization, you may also be unaware that changes are coming.

I previously posted here in my blog that my eBook Using GIMP was going to be released within a few months. What I probably didn't spell out is that it's part of a newly launched book series called Using, under the imprint of the aforementioned QUE Publishing. But why should you care?

To quote the site's blurb:
USING is more than just a book: It's the fastest, easiest way to gain the technology skills you're looking for! Don't just read about it: See it, hear it, with step-by-step video tutorials and valuable audio sidebars delivered through the free Web Edition that comes with every USING book. For the price of the book you get online access anywhere with a web connection—no books to carry, content updated as the technology changes, and the benefit of video and audio learning.
My book will be released as an eBook but not in print format; its Web Edition is described as:
The Web Edition of every USING book is powered by Safari Books Online, allowing you access to the video tutorials and valuable audio sidebars. Plus, you can search the contents of the book, highlight text and attach a note to that text, print your notes and highlights in a custom summary, and cut and paste directly from Safari Books Online.
Some of the upcoming titles include Using LinkedIn, Using Google AdWords and AdSense, and Using Blogger. The whole point of the Using series is that the reader (and I use the term somewhat loosely) accesses the information through multiple media types: text, video, audio, and the web. Topics cover a wide range of subjects, from Microsoft Windows 7 and Mac OS X Snow Leopard to Google Maps and Google Earth and the already mentioned Using GIMP.
Since this is a brand new series type, QUE wants to promote it as strongly as possible (which I suppose is part of why I'm blogging it). To that end, you've got a terrific opportunity to get a hold of and read these books for free by becoming a reviewer. I may take advantage of this opportunity myself since I have a track record as a technical book reviewer. 

Many of these books are or will become available at Safari, so if that's your reading method of choice, you won't be left out. Stroll over to QUE's Using Series web page and see if you can find something that interests you.


Thursday, April 29, 2010

Is the Release of Ubuntu 10.04 Delayed?


Today is Thursday, April 29, 2010. It's almost half past six in the morning in the western United States where I live. I started looking about the web for announcements of the production release of Ubuntu 10.04. I didn't find them. Given the time difference between me and Canonical, I figured the mirrors for the production download would be available by now. I decided to go to the source but the Ubuntu Home page still announces Ubuntu 9.10 as the latest production release. I double checked the release schedule and it does say the Lynx should be at final release on the 29th. Am I being impatient?

After I wrote the above paragraph on the news page at the Linux Tutorial, I went to work, still thinking I was being impatient and that the announcement of the release could be forthcoming. I checked again when I got to my workstation and found no change in the news. I fired up Google and started searching. Then I found a notice saying [i855] 10.04 rc boots into black/blank screen. I read a number of the comments associated with this announcement (they are legion, so I didn't read all of them). One of the latest, posted yesterday, revealed the following (the spelling errors are the sole property of the comment writer):

It is critical. People have a bricked laptop after the update. That´s the worst case scenario. Another console aware friend even called me since switchting to the terminal didn´t work as well and he´s a unix guy and had never heard of ubuntus safemode. It´s really pretty bad. Lucid is an aweseom release but this bug is a showstopper imho.

Let´s keep the fingers crossed someone has the balls to delay it. But I doubt it.
Searching for more information, I hit a forum thread that seems to be more up to date. In fact, the thread is so active that all you have to do is refresh your browser every minute or so to see new posts.

Amid the various posts are specific links to download the Ubuntu desktop edition, in both 32-bit and 64-bit flavors. Don't ask me why this information is so hard to find on Ubuntu's own site.

Just for giggles, I clicked the available link to download the 32-bit Ubuntu desktop, and of course the page was amazingly slow to load; when it did, I only saw this: ERROR The requested URL could not be retrieved. In other words, anyone with an Internet connection is trying to be the first to download the 10.04 ISO from the mirrors. Good luck with that.

A lot of the posts on the thread dedicated to today's (pending) release state that the links for downloading both the 64-bit and 32-bit versions of Ubuntu keep timing out. Either there's a problem with the release and this is Canonical's method of dealing with it, or the release is available and too many people are trying to download it at the same time.

Guess we'll have to wait. The thread on the Ubuntu Forums about the Lucid Lynx release is up to 63 pages as I write this, but another page is added every minute or so. If you want to keep up with the minute-by-minute developments, feel free to visit the thread and watch the progress. For all practical purposes though, you might want to wait and attempt to locate and download Ubuntu 10.04 LTS tomorrow or the next day.

Addendum: I also found some information about a possible release delay.


Wednesday, April 28, 2010

jQuery: Novice to Ninja

Authors: Earle Castledine and Craig Sharkie
Format: Paperback, 300 pages
Publisher: SitePoint; 1st edition (February 22, 2010)
ISBN-10: 0980576857
ISBN-13: 978-0980576856

I was first introduced to jQuery a year or so ago when I read David Sawyer McFarland's JavaScript: The Missing Manual from Pogue Press, which turned out to be more about jQuery than about JavaScript basics (as I had originally assumed). It was a happy accident, though, and I discovered how to get a lot more out of JavaScript by leveraging the jQuery framework, making my efforts generally quicker and less painful (well, I'm not lightning fast, but I'm not an expert, either). I've been looking for a "pure jQuery" book for a while, but there really aren't a lot of good books on the topic out there. When I saw the Castledine and Sharkie book was available, I jumped at the chance to review it.

Who should read this book? There was the usual blurb in the book's front matter about "If you're a front-end web designer looking to..." which I expected, but what are the minimal qualifications the reader should have before shelling out his or her hard-earned dough for this text? Actually, the authors don't come right out and say "you need to know JavaScript to such and such a level" at first. On the other hand, they do say the reader should have intermediate to advanced HTML and CSS skill sets, as well as stating that some (ah, here it is) "...rudimentary programming knowledge will be helpful." Folks assume that JavaScript is "programming light", but it has the same basic rules and structures as other languages such as Python and PHP, so possessing an understanding in that area would seem to be at least a plus, if not something of a requirement. Before I get ahead of myself, though, it's time to move into the book proper.

Chapter 1 is pretty much a combination high-level overview for jQuery and an advertisement selling the audience on its virtues. I don't disdain this. After all, if you aren't sold on the value of jQuery for your web designs, why buy the book in the first place, right? If you're at your favorite bookstore (do people still buy books at stores rather than online anymore?) and you're trying to make up your mind about jQuery (let alone this book), skimming the first chapter should help you with your decision.

I actually got a bit of a lesson on CSS and JavaScript in the second chapter as I came across bits about adding and removing classes, event handlers, if statements, and such. As with any (more or less) beginning programming book, there's always a struggle in deciding how much to assume your audience knows vs. how much to teach them in the book's content. That often spills over into deciding the style of the book. Should it be heavier on concept or on hands-on work? I usually prefer simple, straightforward numbered steps, but while those can get you creating stuff that works, it's also important to understand why it works. Otherwise, the only thing you've learned is how to follow a list of instructions to create a specific effect. This book combines the two, presenting the "steps" as a narrative that also carries the conceptual material.
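As a taste of that chapter's subject matter, here's my own plain-JavaScript approximation of roughly what a jQuery call like $('.note').addClass('highlight') does to each matched element. It's written against a bare object so it runs anywhere; real jQuery handles whole matched sets and many edge cases besides:

```javascript
// Simplified versions of jQuery's addClass/removeClass behavior,
// operating on one element-like object at a time.
function addClass(el, name) {
  var classes = el.className ? el.className.split(/\s+/) : [];
  if (classes.indexOf(name) === -1) {
    classes.push(name); // only add the class if it isn't already present
  }
  el.className = classes.join(' ');
  return el;
}

function removeClass(el, name) {
  el.className = (el.className || '')
    .split(/\s+/)
    .filter(function (c) { return c && c !== name; })
    .join(' ');
  return el;
}

// Stand-in for a DOM element:
var note = { className: 'note' };
addClass(note, 'highlight');
console.log(note.className); // prints: note highlight
removeClass(note, 'note');
console.log(note.className); // prints: highlight
```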

About a quarter of the way through, I thought I'd see how much value the companion website added to the book. As usual, you can buy the book online from the publisher's site, download the sample code (so you don't have to keyboard all the code examples by hand), submit errata, and post questions on a discussion forum. I was looking for that magic combination between web and hardcopy that would make learning a new language or library streamlined. Unfortunately, I didn't find it. Let's face it, programming is hard for the beginner. I'm not trying to be discouraging, but it takes not only a lot of practice and dedication to learn programming, but you have to possess the ability to conceptualize problems as solvable using programming logic. Not everyone can do that, or at least, some people are better at it than others.

Learning from the book will be relatively easy if you are an experienced web designer with knowledge of HTML, CSS, and some JavaScript (the more the better) and/or are reasonably proficient at web programming (or have the right wiring in your brain pan for this kind of learning). I wouldn't recommend tackling jQuery if you've never written any JavaScript before. One person's "easy" is another person's "this is really hard." Heading into this with your expectations grounded in reality will help. To its credit, the book does start you out with a vanilla web site and shows you how to augment it with jQuery, which is the way to do it: build the structure first, then style it, and then add the action and interaction.

I didn't test the code, so I can't tell you how well it works (and I've encountered books before where it was impossible for the sample code to produce the effect described by the book's text). You'll probably want to hit the book's web site and review the errata section, making some notes in the book before performing the exercises, just to save yourself a few minutes (or hours) spent trying to solve a problem that's already known and corrected. You can also visit the discussion forum to see if other readers have had common issues with specific areas of the book. Proficiency is all about doing your homework, both within and outside the text.

A lot of people, even experienced developers, will just Google the effect they want to produce, find the relevant jQuery online, download it, copy and paste, and then perform a little minor tweaking to get it to work on their site. This book proposes to actually teach you how to write your own jQuery, or at least, to teach you how to understand the work of others that you want to use for yourself (with the permission of those "others", of course). Expect to get about halfway through the text before you've learned enough to start writing your own original code.

Starting out as a jQuery novice is easy. We all start there (assuming we're all trying to learn jQuery). How far you get into the "ninja" range depends on how far you progress into the book and how well you integrate the learning into your programming activities. While I found the book a good jQuery guide for beginners, I don't think it's one-stop shopping as far as turning the reader into a world-class jQuery guru...not unless the reader sticks with it, goes through the book, goes through certain portions of the book again, and keeps expanding his or her knowledge and experience.

jQuery: Novice to Ninja may not actually make you the master of the art of invisibility, or even the art of jQuery, but it is a very slick book, and the best example of a jQuery book I've seen cross my path yet. If you're looking for jQuery "ninjahood", this book might not garner you that honor all by itself, but it should put you well down the right path to that destination.


Thursday, April 22, 2010

A Splash in Life's Pond

This small missive departs from my usual fare, but I just had to share it. My daughter is graduating from the University of Puget Sound next month with a degree in Graphic Design. Apparently, our local paper keeps track of such things:

Art work by Jamie Pyles, daughter of Lin and Jim Pyles of Meridian, will be exhibited in the Kittredge Art Gallery during the University of Puget Sound Senior Art Show.

Pyles, a 2006 graduate of Capital High School, uses colored pencils, watercolor and India ink to show the "personification of a human's impulses and emotions through fantasy-type characters," she said.

Thursday, April 15, 2010

Using GIMP

This is jumping the gun slightly, but look for this eBook, written by yours truly, to become available sometime in early Summer. I'll blog more details about the book in a bit.

Wednesday, March 17, 2010

Search Patterns: Design for Discovery

Authors: Peter Morville and Jeffery Callender
Format: Paperback, 192 pages
Publisher: O'Reilly Media; 1st edition (January 26, 2010)
ISBN-10: 0596802277
ISBN-13: 978-0596802271

Whether you think "search" is sexy or not, you probably can't live without it. In fact, according to the blurb on the book's back cover, "It (search) influences what we buy and where we go. It shapes how we learn and what we believe." That's a powerful statement, and probably more true than we realize (or wish). While most of us experience search as users, Morville and Callender provide a practical guide that allows you to build your own search applications...but how good a guide is it? I decided to find out (hence this review).

I'm a visual learner (who isn't?) and this book fairly panders to my needs and desires as a student. The high-quality glossy paper used in this book helps produce very slick and vivid graphics. The first page of the Preface even has a full-color cartoon strip featuring the two authors, though Peter seemed to lack some "dimension" as a toon. I guess that's the difference between a graphic designer (Jeff) and an information architect (Peter).

Humor and personality aren't limited to the visuals of this book. The authors managed to project their personalities into their writing along with the technical aspects of search, right from page one. Search is immediately presented as a tool that needs to talk to and interact with human beings and adapt to who we are, rather than requiring us to adapt to the "needs" of an application.

Not only is this a fun book to read, but it is really useful, particularly in communicating about both the conceptual and nuts-and-bolts aspects of search design. A great deal of information about "usability" is leveraged in the creation of this book since, without users, search is without a purpose. The whole idea of building search is building for people.

This approach is so effective that you may not even notice as you travel deeper into the technical aspects of the book. I got a distinct sense of being pulled along, page after page, as I was reading. I can't say that I completely absorbed every single detail as I progressed, but that's more an effect of my need to understand search design better than any fault of the authors.

For the beginner interested in learning how to build search, Search Patterns is an excellent introduction. Yet, the book was also written for designers and information architects (but not so much for developers as far as I can see) who need to learn more about not only the current state of search but its future implementation.

The only criticism I can offer is that the book seems to be the proverbial "a mile wide but an inch deep". It is an introduction, but it won't tell you all you need to know about designing search. This book will get you started and enhance whatever knowledge you may already possess, but once your appetite is whetted, you'll want more.


Monday, March 15, 2010

Ubuntu 10.04: Waiting for the Lucid Lynx

This'll be short. I read a review of the current incarnation of Ubuntu 10.04, code named "Lucid Lynx" at the In a Tux blog this morning. The author pointed out a number of flaws, great and small, with the Lynx but finished up the review by saying, "This version of Ubuntu 10.04 is not a stable or final release of Ubuntu, so some of these thing my change. Please do not judge them to soon" (and the spelling errors are the sole property of the In a Tux author).

Since the Daylight Saving Time change has "jet lagged" me into near-incomprehensibility (and that's hard to spell when you're really tired), I wasn't quite sure when the Lynx was to be released and I decided to look up the release schedule at Ubuntu.

According to that schedule, we've got a solid six weeks until the Final Release becomes available on April 29th. In fact, the Beta 1 release is still four days off (March 18th) as I write this. While some betas can function almost as well as the final product, you should expect a beta, and particularly an alpha (and the best the In a Tux author could have been working with is Alpha 3), to have a few, or more than a few, outstanding bugs.

I'm not being critical of reviewing pre-release software; in fact, it's a necessary part of the development process, particularly in the open source world where all contributions are important. Yet, I agree that the Lynx shouldn't be judged too harshly while still in the womb, so to speak. You can find a list of all the currently known Lucid Lynx bugs online, so you'll know what bumps in the road to expect if you decide to sample the Lynx as it exists today.

I'm particularly interested in this particular release of Ubuntu since I've been using the previous Ubuntu LTS 8.04 Hardy Heron and am looking forward to upgrading. For those of you in the same boat, if you want to keep current on the moment-by-moment (almost, anyway) changes to Lucid, you can sign up to receive email notifications.

Waiting for something can be difficult and, after all, in the world of technology, six weeks is almost an eternity. If patience is your virtue though, April 29th is right around the corner.


Friday, March 5, 2010

Converting a PDF to a Word Doc with KWord

I was posed with a challenge yesterday and fortunately, the challenge was cancelled. Let me explain why I say "fortunately". At my day job, my boss wanted me to convert a document produced in LaTeX to a Word document. I work with LaTeX in Kile, and this doesn't appear to be an available option. The native output of my little setup is PDF, but the PDF-to-Word conversion options didn't look promising either.

As I said, the immediacy of the challenge was cancelled and another solution was found, but the request could come up again and I thought it would be nice to find an answer now while I have a bit of time on my hands. Long story short, I haven't found a way to convert LaTeX to a Word doc format, but there is a way to open a PDF and save it as a Word doc, using KWord.

I did a fair amount of searching and finally discovered a blog article. It's older information...almost three years old, but I thought I'd see if the solution works, since it promises to be able to open a PDF, save it as an odt or doc, and preserve the formatting. This last part is important, because I really need tables in the PDF to still be tables in the doc.

I dutifully installed KWord on my Ubuntu machine and gave it a shot. While the latest incarnation of KWord does a more or less OK job of preserving format, it is far from perfect. Here are my examples. The first image is the sample PDF page I chose to work with. No, it doesn't have tables, but I'll get to that in a minute.

The second image is the same page opened in KWord. Not exactly a stunning likeness of the first image, but it is pretty good. That said, I tried it on the actual pages I had been asked to work with yesterday and the table formatting completely disappeared when imported into KWord.

I looked at the example of the process in the 2007 blog article vs. what I performed, and the steps and features seem identical. While it looks like KWord (as part of KOffice) is continuing to be developed and maintained, this particular feature doesn't appear to have changed much, if at all, in the past almost three years.

I guess I can't complain too much. This is the closest I've come to solving my little problem, but if converting a PDF to a Word doc is a task on someone's plate at KOffice, I humbly request that it get a little more attention. It would be a big help. Honest.

Afterword: I regularly use Writer to convert odt and doc files to PDF and it works just great. Too bad the abundant resources being fed into OOo development can't also be applied to reversing the process.

Thursday, March 4, 2010

When Did We Forget about Big Brother?

I remember a time when every criticism about Government surveillance invoked George Orwell's classic novel 1984 and the spectre of Big Brother. In Orwell's novel, Big Brother had sort of an appearance, but it was made deliberately vague and we couldn't be sure if he represented an actual character or was more a projection of "the Party". I don't hear much about Big Brother anymore, which is odd...but then again, maybe it's not so odd.

When's the last time you worried about your privacy? Sure, maybe you worry about it all the time, particularly when your very identity can not only be invaded, but stolen and used for all sorts of purposes, not the least of which is to buy just tons of stuff using your name and credit card number.

While we still register danger at the thought of identity theft for the purpose of fraud, when's the last time you considered just how many people and agencies have access to the most intimate details about your life? How many private and public databases contain your name, date of birth, social security number, and a raft of other sensitive information about you? I'm talking about those entities that you've authorized to possess such data, never mind Government security agencies and the like (as if the NSA really cares about what you say on your cell phone).

I was prompted to write this blog by the use of something with a rather benign name: Einstein technology. More specifically, by today's article, "Feds weigh expansion of Internet monitoring," in which Homeland Security Secretary Janet Napolitano assures the American public that the DHS's proposed plan to extend the Einstein technology, now monitoring the public areas of the Internet, into private networks won't constitute an invasion of privacy. Really?

The Einstein technology is designed to "detect and prevent electronic attacks, to networks operated by the private sector" and was created for use on federal communications networks. However, according to the CNET article, the latest version of Einstein can read email content, and AT&T has been asked to test its capabilities on its system. In response to concerns about the proposed use of Einstein, Greg Schaffer, assistant secretary for cybersecurity and communications, warmed my heart by speaking thus:
"I don't think you have to be Big Brother in order to provide a level of protection either for federal government systems or otherwise," Schaffer said. "As a practical matter, you're looking at data that's relevant to malicious activity, and that's the data that you're focused on. It's not necessary to go into a space where someone will say you're acting like Big Brother. It can be done without crossing over into a space that's problematic from a privacy perspective."
Nice to know my "old friend" Big Brother has been let out of history's basement for a breath of fresh air.

Not that the boogie man of Cyberterrorism is anything to sneeze at (and it shows up often enough in fiction, such as the recent film Live Free or Die Hard). I fully believe that security must be established and maintained along our electronic and cybercommunications frontiers as well as any of our physical borders, and that insufficient protections invite attack, but there's always a price to be paid.

In 1968, the first federal seat belt law for motor vehicles (except for buses) came into being. Few people argue that seat belts save lives and provide a measure of protection in car accidents, but the cost of that protection is the loss of a certain amount of freedom within the interior of the car. Just ask any parent who's tried to turn around at the wheel (while the car was stopped, of course) to yell at misbehaving children in the back seat. Potentially save your life, vs. some lack of mobility. Seems like a reasonable trade off.

The trade off for having more (but is it enough?) security when flying on an airplane is to have you and your personal property scanned and searched by federal officials. Less likelihood of a terrorist planting a bomb on your flight vs. having your body wanded and your luggage ransacked. Do you consider that a tough choice?

What is the trade off for protection against Cyberterrorism? What are the dangers, and how much are we as citizens willing to surrender for protection from said dangers? Are we talking about defacing the IRS website, a DDoS attack against the INS database servers, or what? How imminent is the threat?

Turns out this is nothing new. According to an example cited at Wikipedia:
In 1999 hackers attacked NATO computers. The computers flooded them with email and hit them with a denial of service (DoS). The hackers were protesting against the NATO bombings in Kosovo. Businesses, public organizations and academic institutions were bombarded with highly politicized emails containing viruses from other European countries.
1999? Certainly Governments have gotten better at protecting themselves against such intrusions since then. Yes they have. Enter Einstein. Of course, we have to assume the tools to create such attacks have gotten better, too. Still, is all this worth the possibility of having your private or business communications potentially accessed?

You can't really say that's a personal choice. The private sector is being asked to cooperate and to allow Einstein in the door, so to speak. It's not like wearing a seat belt where you could say "screw the rules" and take the risk anyway. It's more like getting on a commercial flight where you don't have a choice. You will be scanned and potentially searched. Well, yes you do have a choice. You can choose to drive or take a train (do they still have trains?) if you don't want to put up with the intrusion, but travel will take longer and getting from San Francisco to Tokyo is kind of tough by car without the world's longest bridge being available.

In your personal life, you probably spew just a ton of personal information in social networking venues such as Facebook and twitter, but we're not talking about Einstein peeping in your bedroom, at least not at this point. In your business life, you are likely required or at least expected to use email and other forms of electronic information transfer and data storage. As far as the company is concerned, when you use their computers, servers, and email, the information that moves across them belongs to them. Now, at least to some degree, security for your company is not just the business of your company, it's the business of the Federal government, too.

I Googled "cyberterrorism" to try to get a handle on just how real this threat is, but it's a topic presenting too much data, a lot of it conflicting. Einstein and the DHS are a specific example of how Governments tend to operate. Like programs such as Health Care or the Stimulus, plans are created and then enacted upon masses of people, some who don't mind and others who object, and yet all experience the same impact. It's like turning on the lights in a bedroom. Maybe one person wanted to read a book but the other person wanted to go to sleep. In a house, the reader can go to another room, but a nation is just one big room. In effect, so is the Internet, and so is Einstein's potential for peeking through your company's windows or mine.

Despite everything I've just written, I don't wind myself up so I can't sleep at night worrying about this stuff. One of the reasons I figure Big Brother isn't talked about much anymore is that we've all gotten used to the idea that we don't have a great deal of privacy anyway as individuals or corporate entities. As long as it doesn't have a visible impact on our day-to-day lives, most people don't care what information is gathered about them. At this point, it's being proposed that Einstein enter the private sector but not the private home. Business is being asked to cooperate with the DHS to ensure the greater good, and whatever information is gathered, is to be squirrelled away behind the "national security" curtain for the country's protection. Is it worth the trade off?

Afterword: While I was writing this article, I was struck with the urge to look up an old textbook I used back in the late 1970s, The American Police State: The Government Against the People by David Wise. As I recall, it's about the abuses of the Nixon administration against corporate entities and private citizens in the cause of suppressing dissent against the administration's interests. One site summarizes the book in part:
This contribution to the spate of books to emerge out of Watergate was one of the better efforts. Two chapters concern the CIA -- one on domestic surveillance and the other on CIA involvement in Watergate. Additional chapters include the FBI and IRS and their role in suppressing domestic dissent, and the machinations of the CREEP plumbers, Kissinger, and black-bag jobs in general. His final chapter is an editorial against the official methods: "If we accept the values of the enemy as our own, we will become the enemy."
I wonder why I started thinking about Wise's book now?


Wednesday, February 10, 2010

Google Buzz: First Impressions

I've been hearing a lot about Google Buzz lately and lo and behold, it shows up in Gmail this morning. Initially, I ignored it, but I visit my Gmail account quite often and so figured, "what the heck". As I was going through the set up process (which isn't really involved), I was inspired to open up Google Wave for the first time in more than a month. I saw a few new Waves, but nothing like the flood of unread messages I'd expect if I just ignored Gmail for about six weeks. I've written a couple of blogs on Wave, including an initial review and an update called Why Hasn't Google Wave Gone Viral? My interest in Wave has waxed and waned and now that Google has thrown Buzz into the mix, was I supposed to get excited?

Frankly, I feel like I must be missing something. I started following a few people on Buzz, particularly Jesse Newhart, and in reading the various discussions he's started, a lot of people seem completely thrilled about Google Buzz. I did some searching, trying to discover the Buzz potential and amazingly, I even found an article published at Business Insider called Is Google Buzz a Facebook Killer?

Frankly, I find it difficult to keep up with Buzz, although I've gotten to the point where I'm on top of twitter. I use TwitterGadget in iGoogle, where I spend much of my time when I'm in front of a PC, so I can keep up on tweets and still do my other work. But I have to visit Gmail to find my new "buzzes", if that's the correct term. Not that I don't open Gmail a great deal as I mentioned, but I don't have it open constantly. If traffic in Buzz is supposed to be as frantic as in twitter, assuming a fairly large number of followers, then you'd have to keep an eye on it more or less all the time.

I found myself thinking of the famous quote from The Fellowship of the Ring:
One Ring to rule them all,
One Ring to find them,
One Ring to bring them all
and in the darkness bind them.

Between just how many different interfaces must I bounce in order to satisfy my need to be social?

If Google could put Gmail, Buzz, and Wave in one place, or at least make easy and quick connections between them, and then get them all to really talk back and forth to twitter, Facebook, and the like, and then give me one place to aggregate the whole thing, it might serve my needs. Of course, I don't know that that's not what Google has in mind for all this; it just seems that with Wave not having "found its feet" yet, so to speak, launching another big social networking app just for giggles is a tad much.

OK, I'm not throwing the baby out with the bathwater (yes, that's an old cliche, but the old curmudgeon in me made me say it) and I did title this article "First Impressions". While I'm not dismissing Buzz completely out of hand, I haven't exactly fallen into "early adopter euphoria" over it either.

To give it a fair test, if you follow me on Buzz, I'll follow you back...unless you're a slutbot (I had to block the first one in Buzz just a few minutes ago) or a spammer.

What do you think?


Tuesday, February 9, 2010

SourceForge Lifts the Block: The Power of Negative Publicity

I woke up this morning to Joe Brockmeier's blog and the happy news that SourceForge has decided to lift its block against the various nations the United States has placed on its embargo list. I had blogged on the original ban announcement and was pleased to see further action had been taken. Actually, the entire matter is not quite as clear cut as it may seem.

First, you'll recall the original announcement by SourceForge that it was establishing a denial-of-access list in order to comply with United States legal requirements, banning nations such as Iran, North Korea, and Syria from being able to access any of the open source projects hosted at SourceForge. Of course, that didn't just ban the governments running various totalitarian regimes in these nations, but also every single citizen living in the banned countries. In essence, the United States, and by its legal compliance SourceForge, was restricting people from access to open source projects just because of where they lived.

I didn't blame SourceForge for this (although plenty of people did). When you get a legal order from an entity that has the right to issue and enforce legal orders, if you are law abiding in your nation of residence, you comply with the order. SourceForge had nothing to gain by "bucking the system" and could ultimately do more harm than good to the open source community by telling the U.S. Government to "go pound sand".

Things have now changed. SourceForge has decided to lift its ban, according to its announcement last Sunday, but that doesn't mean it's "business as usual". SourceForge has put the responsibility to allow or deny access to projects in the hands of the individual project administrators. This makes a lot more sense when you consider that not every project is banned by the U.S. from being disseminated to embargoed nations.

Is full access to all the projects at SourceForge completely restored? No. Access is now determined on a project-by-project basis by the project administrators themselves. SourceForge is only involved to the degree that it has allowed project admins this level of control over project access on the SourceForge site. Most of the comments made in response to this action, at least from U.S. developers, are really positive. Non-U.S. folks tend to still slam the U.S. embargo list if not SourceForge, including one German fellow:
I am not an U.S. citizen, so I give a fuck on U.S. laws. We Germans are allowed to export anything to anywhere. Also our encryption mechanisms. So it’s all right for me.
I guess you can't please everyone.

For instance, Brockmeier's blog included a link to ArabCrunch's take on the SourceForge matter (despite the fact that the embargo list doesn't affect just Arab nations). You can read first hand the thoughts, and particularly the emotions, this entire incident has evoked, as written by Abdulrahman Idlbi, who "is [a] computer engineering master’s student at King Fahd University of Petroleum and Minerals", in his guest editorial.

Did SourceForge do the right thing? Yes. The overarching principle of open source is to be accessible to everyone, and I mean everyone. Politics, ethnicity, gender, and any other differences and divisions simply don't matter. Open source, at its finest, functions to unite people, or at least developers, all over the world, in a common and peaceful endeavour. OK, the real world doesn't work that way, but as I said, this is an ideal. I think we found out pretty quickly that political and ideological differences kick in with a vengeance (see the comment from the German fellow and the ArabCrunch article) when you throw a monkey wrench into the machine.

Open source is an ideal but this entire sequence of events has illustrated with great clarity that we human beings, all of us, have a long way to go before we even approach this ideal with how we think, feel, and live.


Friday, February 5, 2010

Fourth Annual Web 2.0 Expo San Francisco Focuses on Platforms for Growth

This information appeared in my inbox a day or so ago, so I thought I'd pass it along. Feel free to do the same. Cheers.

San Francisco, CA, February 4, 2010 — O'Reilly Media, Inc. and TechWeb, producers of Web 2.0 Expo and Web 2.0 Summit, today announced the return of Web 2.0 Expo San Francisco, the annual event for designers, developers, entrepreneurs, marketers, business strategists, and venture capitalists building a web for the 21st century. This year, Web 2.0 Expo centers on the theme of "Power of Platforms," helping businesses choose and leverage the right web platforms for success. Web 2.0 Expo San Francisco is May 3 - 6, 2010 at Moscone West.

"It's not just about how companies use the web as a platform," said Sarah Milstein, Web 2.0 Expo Co-Chair and TechWeb General Manager. "It's about how they fit into an ecosystem or create new, larger ones. What we're interested in this year is the new wave of companies that have the potential to build new economies and exploring exactly how that is done."

Celebrating its fourth year, Web 2.0 Expo continues its tradition of inspiring and educating the tech industry by providing unparalleled educational programs and valuable networking opportunities. This year, the event launches One Day Intensives, "mini-conferences" that feature expert speaker panels in a participatory classroom experience. Web 2.0 Expo Intensives feature "Lean Startup" with Eric Ries and "Applied Communilytics" with Sean Power and Alistair Croll.

Web 2.0 Expo San Francisco will consist of a multi-track conference, an "unconference" program called Web2Open, a major tradeshow and many networking opportunities and events. Conference tracks include: Strategy & Business Models, Social Media Marketing, Design & User Experience and Development with six focus tracks on Mobility, Community, Real-time, Analytics, Enterprise and Cloud Computing.

Web 2.0 Expo San Francisco 2010 welcomes supporting companies, including Platinum sponsor Microsoft; Gold sponsors Adobe, Invest in Germany, and IBM; Silver sponsors Berlin Partner, blueKiwi, EffectiveUI, HP, Neustar, OpenSRS, OpenText, The Planet and SOASTA.

To learn more about the 2010 Web 2.0 Expo San Francisco or register visit:
For articles, blogs, photos, videos, and speaker presentation files from Web 2.0 Expo SF 2009, see:

To read the O'Reilly Radar, visit:

If you have ideas about areas you'd like to see included at the conference, send a note to:

If you'd like to stay up to date on information relating to Web 2.0, sign up for the conference newsletter (login required):

About TechWeb
TechWeb, the global leader in business technology media, is an innovative business focused on serving the needs of technology decision-makers and marketers worldwide. TechWeb produces the most respected and consumed media brands in the business technology market. Today, more than 13.3* million business technology professionals actively engage in our communities created around our global face-to-face events Interop, Web 2.0, Black Hat and VoiceCon; online resources such as the TechWeb Network, Light Reading, Intelligent Enterprise,,, and The Financial Technology Network; and the market leading, award-winning InformationWeek, TechNet Magazine, MSDN Magazine, Wall Street & Technology magazines. TechWeb also provides end-to-end services ranging from next-generation performance marketing, integrated media, research, and analyst services. TechWeb is a division of United Business Media, a global provider of news distribution and specialist information services with a market capitalization of more than $2.5 billion. *13.3 million business decision-makers: based on # of monthly connections.

About O'Reilly
O'Reilly Media spreads the knowledge of innovators through its books, online services, magazines, and conferences. Since 1978, O'Reilly Media has been a chronicler and catalyst of cutting-edge development, homing in on the technology trends that really matter and spurring their adoption by amplifying "faint signals" from the alpha geeks who are creating the future. An active participant in the technology community, the company has a long history of advocacy, meme-making, and evangelism.

O'Reilly conferences bring together forward-thinking business and technology leaders, shaping ideas and influencing industries around the globe. For over 25 years, O'Reilly has facilitated the adoption of new and important technologies by the enterprise, putting emerging technologies on the map.
