When I reviewed the Sonos system back in late 2008, I absolutely fell in love with it. I listened to more music while I had the Sonos review units than I had the entire previous year. I was extremely disappointed to have to give them back when the review period was over. Since that time, I've wanted to own my own Sonos gear, but frankly the price was prohibitive. Not only were the units themselves expensive, but they also required speakers. I've never had high-quality speakers of any sort, and adding those to the cost of the already expensive Sonos system was just not practical for me.
Sonos has changed their line-up a bit since I reviewed them, though. They now offer their wireless magic packaged into two different speaker selections: the Play:3 and the Play:5. They're still expensive, but the bundled units felt like a more approachable purchase. They also take up less space than the older products plus external speakers. I decided to splurge and buy one of each, plus the requisite Zone Bridge to tie them together. This was ostensibly a Christmas gift for the whole family.
I waffled for a long time before committing to the purchase, because it's a sizeable investment in something we didn't really need. I considered just getting a pair of Play:3 speakers, but finally decided that the line-in on the Play:5 was a worthwhile investment should we ever decide to use it as the speaker for our movie watching. I ultimately reconciled the high price with the knowledge that this was an investment in something we'd be enjoying for a very long time.
Since they arrived, we've been making great use of them. Whether we're streaming music from Pandora, or listening to our collection of MP3s, or just streaming NPR stations, we've been using them daily. We listen to more news programming while at home -- while eating breakfast or eating dinner -- now than ever before. Sure, a simple radio in the kitchen might have accomplished this same task, but being able to use our various smartphones and tablets to find programming from all around the world is a real joy.
And being able to switch from news to music with the push of a button is simply wonderful. The kids both have the Sonos app installed on their iPod Touches, and enjoy controlling the streaming music for the whole house. They can crank the music in the living room while romping with Josephine, and Angela and I can have a conversation in the kitchen, listening to the same music at a more modest volume.
The Sonos system was as easy to set up as it was when I reviewed it three years ago. It's simple, and it just works without any surprises or fuss. I was a little disappointed to find that Last.fm requires a paid subscription in order to stream to Sonos, but Pandora works just fine. And of course I have tens of gigabytes of MP3s of my favorite music available to stream at any time. All in all, the Sonos has been a terrific purchase.
For the last umpteen years I've been using a Linksys WRT54G (hardware version 2.2) wireless router. I've used OpenWRT, dd-wrt, and most recently Tomato custom firmware images to allow me to do things that the stock firmware didn't fully support. Through all of these permutations I've always had an open wireless network, both because I've always felt it's a pain to type in passphrases on client devices and because I think it's important to provide wireless access to neighbors in need. To date, no one has egregiously abused the wireless access I've made available to them.
But the WRT54G is getting long in the tooth. I have a couple of devices that can speak Wireless N, as well as a couple of gigabit capable devices. The WRT54G is limited to Wireless G and 10/100 Ethernet ports. I've also had a desire of late to split off our personal network traffic from that which I make available to my neighbors, so I've been looking at wireless routers that support a so-called "guest network" option. Finally, the WRT54G sometimes simply flakes out, and requires a power cycle in order to work properly. That's usually not a big deal, but it can be a real pain during the middle of a movie we're streaming from Amazon.
Today I bought a Linksys E3200. It has most of the features I want: A/B/G/N wireless support, 10/100/1000 Ethernet, and a guest network. It has support for Dyn dynamic DNS, a service on which I rely; though it sadly does not support DNS-O-Matic.
Setup was easy, and I was able to get everything configured without any real hassle. It wasn't until I was all finished that I noticed no local domain name was being supplied to clients. Indeed, there are no local DNS controls anywhere inside the E3200 configuration pages (that I could find), which meant that none of my local clients could address one another by name. This meant my Sonos couldn't find my NAS to stream music, and the Mac connected to my TV couldn't find the NAS to stream movies.
With the WRT54G -- through all the custom firmwares I used -- I had a local caching DNS resolver that was able to arbitrate local client names. I used skippy.lan as my local domain name, and have come to rely on foo.skippy.lan being a resolvable address.
The workaround for this problem was to install and configure dnsmasq on my Pogoplug NAS, and then configure the E3200 to tell clients to use the Pogoplug as their primary DNS server. This somewhat defeats the purpose of the E3200, because it means that my network is now dependent on two devices for full (internal) functionality. It also places just a little more load on the Pogoplug, a device with extremely limited resources that I'm trying to maximize.
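For reference, the relevant dnsmasq settings amount to just a few lines. This is a sketch rather than my exact configuration; the skippy.lan domain is the one I use, but the assumption that local names are listed in the Pogoplug's /etc/hosts file is just the standard dnsmasq convention:

```
# /etc/dnsmasq.conf (excerpt) -- a sketch, not a verbatim config
domain=skippy.lan      # local domain name to report to clients
expand-hosts           # "foo" in /etc/hosts also resolves as foo.skippy.lan
local=/skippy.lan/     # answer skippy.lan queries locally, never forward upstream
```

With this in place, any machine listed in /etc/hosts on the Pogoplug becomes addressable as name.skippy.lan by every client pointed at it for DNS.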
After a couple hours of use, the E3200 works as expected, so I'm moderately satisfied with it. I may try to flash a custom firmware onto it in the future, but for now I'm just going to stick with the stock image and see what happens.
We have a small collection of books in Josephine's room, and I've been trying to get into the routine of reading a few pages to her every night before bed. At just eight months old, she doesn't have the attention span to sit through all of even a Sandra Boynton book, and she's more often interested in trying to gnaw on the books than look at the pictures. Thank goodness the various board books we have were specifically designed for just such abuse, and Josie delights in trying to eat them. I know she also enjoys the warmth of my presence and the sound of my voice, so I'm hopeful that, as we continue the routine of reading before bed, she'll start to listen to the tales rather than try to ingest them.
Last night I grabbed at random a book from Josie's bookshelf. I was surprised to see that it was My Go to Bed Book. It's an old, tattered book with the cover just barely hanging on. On the inside front cover is written "To Scott, from Mommy and Daddy - Christmas 1976".
The book I read to my daughter last night was the very same book my parents had read to me.
As I read the words, a part of my mind wandered off to ponder the relative value of physical books and e-books. I don't think there's any way an e-book released today would be readable three decades from now. One need only consider the examples of floppy disks or Zip disks to understand the relative impermanence of any specific piece of storage technology. Do we seriously think that today's Kindles and Nooks will work even ten years from now? And that's just the physical reader; it doesn't account for the electronic formats of e-books, which will no doubt continue to evolve. Have you tried to open a WordPerfect document lately?
I'm all for technological advancements, but I also think there's something very important about the legacy of physical objects. There's not the same sentimental value to an e-book as there is with a physical book. If the Go to Bed book avoids my daughter's teeth there's the very real possibility that she can read it to her kids, and share with them the fact that I had read it to her, and my parents had read it to me.
Would it mean anything to my grandkids to know that the antique Amazon Kindle on which they might read an e-book was the same once-shiny-and-new Amazon Kindle on which I read the same e-book?
The internet is an indispensable resource, providing quick access to everything from news and stock quotes to weather and sports. Many people bookmark the sites they read on a regular basis, and making the rounds to read these sites constitutes a daily ritual, whether it's over morning coffee or during the workday lunch break. Keeping up with all the sites in your bookmarks can be a daunting task. Some websites update on regular schedules, others update randomly throughout the day, and still others update so infrequently that you never know when an update will occur. Various solutions have been put to the test in the past, ranging from the failed Pointcast news-delivery screensaver to the notify-by-email systems popular on many professional news sites.
For many folks who spend most of their day online, content syndication is the only way to keep up with the tide of information updates. Rather than manually browsing all of your bookmarked sites, you use a computer program to do all that boring work for you and prepare a complete list of all the latest information. If a website hasn't updated yet, you don't need to waste your time checking it. If all of your favorite websites have updated, you can read all of that fresh content in a single sitting, without waiting for the pages to load or looking at the annoying advertisements. And you can be more efficient by skimming the headlines of new stories to decide whether each one is something you really want to read now, save for later, or skip altogether.
You may have seen small orange buttons on many websites. These buttons are links to eXtensible Markup Language, or XML, versions of the web pages we read. Although XML documents can be read by human beings (with a little effort), they're designed to be read and processed by computer programs. Clicking one of these links will produce various results, depending on which web browser you're using. The links aren't really for readers, but for programs called aggregators.
Aggregators, or feed readers as they're also called, are the programs that do all the work of visiting and collecting web updates for you. When new content is available, the aggregator fetches the XML data and makes it available for you to read in a friendly way. Aggregators generally list new items first, so you can quickly skim all your feeds, or subscribed sites, in the order in which they were updated. There are many different aggregators available.
You, the reader, tell your aggregator that you want to subscribe to a site's feed. Most of the time, you simply enter the URL of the website you want to follow and your aggregator will automatically locate the XML version of the content. Then, the aggregator will periodically check for and fetch any new updates.
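The "automatically locate" step is usually called feed autodiscovery: the aggregator fetches the page you gave it and scans the HTML for a link tag that advertises the feed's address. Here's a minimal sketch of that scan using Python's standard library; the sample page and its /feed.xml address are invented for illustration:

```python
# Sketch of feed "autodiscovery": scan a page's HTML for a
# <link rel="alternate"> tag that points at the site's XML feed.
# Real aggregators handle many more edge cases than this.
from html.parser import HTMLParser

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

class FeedLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link"
                and a.get("rel", "").lower() == "alternate"
                and a.get("type") in FEED_TYPES
                and "href" in a):
            self.feeds.append(a["href"])

def discover_feeds(html):
    finder = FeedLinkFinder()
    finder.feed(html)
    return finder.feeds

page = """<html><head>
<link rel="alternate" type="application/rss+xml" href="/feed.xml">
</head><body>...</body></html>"""

print(discover_feeds(page))  # ['/feed.xml']
```

This is why you can hand most aggregators a plain website address and let them find the feed themselves.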
Some aggregators are programs that run on your computer. For these to work, your desktop or laptop computer needs to be on and connected to the internet. If you use an aggregator on your PC at home and on your PC at work, you may have to skip past the content you've already read, which rather defeats the purpose of aggregating the content to begin with! Thankfully, several web-based solutions also exist. The advantage to using a web-based aggregator like Google Reader, Bloglines or NewsIsFree is that their computers do all the work of polling websites and obtaining updates, and you can read your list of feeds from wherever you may be using nothing more than your web browser. This means that you can check your list of feeds while eating breakfast at home, and then see a complete list of updated information during your lunch break at the office. Web-based aggregators also work well with smartphones, allowing you to keep up with your feeds while on the go!
Content syndication saves website readers time, and it can also save website operators money. Visitors using a web browser to read a website request and receive the entire site every time: text, graphics, layout information, advertising, and anything else that might be on (or in) the page. Web browsers can cache (or "remember") some of this information, but there are a variety of reasons why this doesn't always work, and the visitor ends up downloading almost everything from that page on every visit, regardless of whether there's anything new (advertising often causes this, as the advertisements change every time the page is loaded). Aggregators visiting a site's feed first check the timestamp of the XML feed: if it hasn't been updated since the last visit, the aggregator immediately stops. When updates are available, aggregators receive only the content from the website, without advertising, background images, navigation buttons, and the like. Combined, these can prevent vast amounts of unnecessary traffic, allowing content producers to reach their audience without incurring astronomically expensive web hosting bills.
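That timestamp check is the standard HTTP "conditional GET": the aggregator sends back the Last-Modified date it saw on its previous fetch, and the server answers with a bodiless "304 Not Modified" when nothing has changed. This toy server-side version of the decision is a sketch of the idea (the dates and feed content are invented); real servers compare full HTTP-date headers the same way:

```python
# Sketch of the server side of an HTTP conditional GET: if the client's
# If-Modified-Since date is as recent as the feed's last change, reply
# 304 with no body; otherwise send the feed. Dates/content are invented.
from email.utils import parsedate_to_datetime, format_datetime
from datetime import datetime, timezone

def respond(feed_last_modified, if_modified_since=None):
    """Return (status, body) for a feed request, honoring If-Modified-Since."""
    if if_modified_since is not None:
        client_seen = parsedate_to_datetime(if_modified_since)
        if feed_last_modified <= client_seen:
            return 304, b""  # unchanged: the aggregator stops right here
    return 200, b"<rss>...feed content...</rss>"

updated = datetime(2011, 12, 1, 12, 0, tzinfo=timezone.utc)
stamp = format_datetime(updated, usegmt=True)  # e.g. "Thu, 01 Dec 2011 12:00:00 GMT"

print(respond(updated, stamp))  # (304, b'') -- no re-download needed
print(respond(updated, None))   # first visit: full 200 response with the feed
```

A 304 response carries only headers, which is where the bandwidth savings for site operators come from.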
Several tangential benefits also arise from using content syndication. First, syndication-specific search engines look through feeds, allowing you to maintain a constantly-updated list of links to information in as close to real-time as currently possible. Second, the machine-readable format of feeds makes it possible to create mashups using syndicated data -- much easier than personally visiting the pages to copy-and-paste the bits you want. Third, syndication can be used to include content from other sites into your own website.
Of course, there are plenty of challenges with content syndication. The biggest challenge is the variety of machine-readable formats used for feeds. Although feeds are written in XML, there are several popular dialects of that language. Some feed-reading programs can speak them all, while others are limited to just one or two. There's a joke that succinctly explains the situation: "The great thing about standards is that there are so many to choose from!"
The most common syndication format is RSS (which stands for Really Simple Syndication, or Rich Site Summary, or maybe RDF Site Summary), which is itself a little misleading because there are nine different versions of RSS. The history of RSS is complicated, with several competing parties vying to establish the definitive standard. In common practice, only two or three of these formats are regularly used, but even that's too many.
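To make this concrete, here's what a minimal RSS 2.0 feed looks like, along with the handful of elements an aggregator actually reads from it. The element names are the standard RSS 2.0 ones, but the feed content itself is invented for illustration:

```python
# A minimal RSS 2.0 feed, and the elements an aggregator reads from it.
# The feed content here is invented for illustration.
import xml.etree.ElementTree as ET

feed = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Weblog</title>
    <link>http://example.com/</link>
    <description>An invented feed for illustration.</description>
    <item>
      <title>First post</title>
      <link>http://example.com/first-post</link>
      <pubDate>Thu, 01 Dec 2011 12:00:00 GMT</pubDate>
      <description>Either the full post or just a teaser goes here.</description>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(feed)
for item in root.iter("item"):
    print(item.findtext("title"), "->", item.findtext("link"))
# First post -> http://example.com/first-post
```

Each new post becomes another item element; an aggregator just lists the items it hasn't shown you yet, newest first.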
The other dominant syndication format is Atom, a community-driven format that tries to avoid many of the perceived shortcomings of RSS. Atom isn't (yet) as widely supported as RSS, but it's quickly gaining ground. Lengthy discussions rage on about these competing standards.
Thankfully services exist that translate syndication formats, so you don't need to worry about which format is winning the debate. Web-based feed readers should all be smart enough now to handle any feed format, so you the reader shouldn't need to worry about this too much. If you're a content producer, it's worth spending some time to familiarize yourself with the various formats, so that you know what you're offering to your readers.
Another issue with content syndication is that the syndication source (i.e., the website offering the feed) chooses whether to provide the full content of new posts or just an excerpt. Advertising-driven websites will often provide just an excerpt, or teaser, to tickle your fancy in order to get you to load the webpage in your browser and thus see the ads that are displayed there. Some such sites will send out only the first couple of sentences of each post, which may or may not provide enough information for you to determine whether it's worth your time to follow the link to the story. Other sites will carefully craft meaningful summaries of new items which you can quickly skim to decide whether to read the in-depth report.
Many syndication feeds come from personal weblogs, but big businesses are recognizing the value of the technology. Reuters offers feeds for its news items. The BBC offers categorized news feeds. Microsoft offers feeds for developer resources. Apple offers RSS feeds for its iTunes Music Store to display new releases and top rated songs or albums.
I've been using Google Reader as my aggregator for some time now, and have been thoroughly pleased with it. It offers a nice suite of features, good performance, and the traditional simple Google user interface. If you're not yet using an aggregator, or are unhappy with the one you're using, consider giving Google Reader a shot.
What are you waiting for? Start aggregating!
Friday afternoon I attended the Exploring Learning Technologies UnConference at OSU. I've attended a number of unconferences in the last couple of years, and I generally like the format. There is, of course, great potential for a lousy conference if the attendees don't know that the event is up to them. I didn't know anyone signed up for ELTU, so I was a little trepidatious about what I might get out of this specific event.
The registrations were limited to 50 people, so it was going to be a smaller event than some I'd attended. We were in a single large room, and it was explained to us that the sessions would occur in the four corners. As with any unconference, if you weren't getting anything of value from the session you selected, you were encouraged to get up and move to a different session. It's been my experience at previous events that people are simply too polite or too self-conscious to leave a session if it involves getting up, opening a door, and exiting. Having all the sessions in a single room made it substantially easier to float between sessions if one was unsatisfied.
Another benefit of having sessions in the same room was that we could all overhear some of the discussions going on. So even if your session was satisfying, you might catch a snippet of another conversation that was even better, and thus switch gears. Each corner of the room had a computer connected to a projector, and these were used to take notes on a wiki, which was displayed on the walls. So even if you couldn't hear a conversation that was taking place, you could see the notes that were being recorded, and use those to decide whether another session might be a better use of your time.
Interestingly, I didn't see many people switch sessions. I think this is because the scheduling process we used helped ensure that people got to attend sessions in which they had a genuine interest. At the unconferences I've attended previously, people self-select to present or lead a discussion, and place their presentation on a free spot on the conference schedule (often a corkboard with index cards and thumbtacks). At ELTU, however, we did things a little differently.
When we arrived, we each picked up name tags with our names printed on them. Beneath our names was a blank space labeled "My Tags". We were instructed to write down three or four keywords describing our interests for the event. When everyone had arrived, we quickly went around the room introducing ourselves (name and department), and read aloud the tags we had written down. The organizer of the event jotted these down, and those that were repeated multiple times instantly became candidates for session topics. With the potential topics thus identified, we then worked together to place them into the schedule. This worked surprisingly well, as several participants needed to leave early, and were thus able to get the sessions in which they were interested scheduled first, so that they could attend.
There were no dedicated speakers, and no specific facilitators in the sessions. Instead, each was an open discussion. This could have really backfired if any one attendee had hijacked a session, but thankfully that didn't seem to occur. Dynamic conversations took place around the room, and everyone seemed to be pretty well engaged.
Without specific speakers, the participants were left on their own to take value from the sessions. I can't say that I learned a lot, but I learned about a lot of things, and collected a pretty comprehensive list of tools and technologies to investigate later. More than anything, though, events like this are good for meeting people. Some of the folks present were looking to implement technologies with which I had a lot of familiarity, so I was able to make suggestions. Others were wrapping up implementations of stuff that I want to do, so I had an opportunity to pick their brains. We all had something to share, and I'm really looking forward to following up with some of these folks in the weeks ahead.
I learned about edupunk; had a fascinating and wide-ranging discussion about generational differences and how they affect people's attitudes toward learning technology; and learned about some of the cool things different units are doing on campus to share information in new ways (the Open KSA stuff going on at the architecture department was particularly interesting). One of the most entertaining moments for me personally was when Chris Hill, one of the event organizers, said he had recently heard someone state that "the 'twi' prefix is the new 'cyber'." It was surprising to find myself being quoted at an event I was attending! :)
After the final sessions we had a small wrap-up / debrief, in which we discussed the things that we think worked, and the things that could use some improvement for the next time around. One of the participants expressed a desire for a formal schedule ahead of time, so that she could know which sessions she wanted to attend. This is directly contrary to the entire notion of an unconference, so it's unlikely to occur. It was suggested instead that specific "tracks" be created, with focused topics developed within each track the day of the event. This would allow folks to better prepare while still preserving the unconference format.
Another attendee expressed concern about the general ambiguity of the event before things started. There was no way to really gauge whether topics would be discussed that were of interest to you, and there was no way to really know whether the event would be a useful way to spend one's time. Finally, several people expressed perplexity about related events that were discussed. "What's a podcamp?" someone asked, when PodCamp Ohio was mentioned.
Late last year I wrote about some of my experiences at recent unconferences, and I specifically pointed out the problems of naming and of knowing ahead of time that an event will be worthwhile. I think some of these questions will get resolved over time as more people attend unconferences and related events. And as more people get experience attending -- and thus participating in -- unconferences, the format will continue to improve.