Stefan Hayden

Archive for the 'resources' Category

Well, Google Analytics is out, and now every other stat counter is doomed. Right? Who can stand up to Google? I refuse to believe any of it; Google's entrance into stats will make little impact on Mint or Measure Map.

Who has made the claim that Google will crush all and has actually used Analytics? Wow, it's complicated. There are so many menus, and half the time I have no idea what the graph is even representing. A large portion of the interface is also devoted to AdWords campaigns and detailed study of how people move through your website. More than 50% of Analytics is devoted to stat tracking that 90% of the people using it for free will have little use for.

Is Google making a mistake? No, of course not. Google is just not trying to get me to use their software. What they are really looking for is a way for people to get more out of AdWords and spend more money with Google. This will be good for corporate or high-volume websites that advertise through AdWords, but few others. Google is going after the Fortune 500 companies who will dump tons of money into AdWords. That is who their site is optimised for.

Now let's look at Mint and Measure Map. Who are they optimised for? While I have not used Mint, there are some very glaring things that show it might not be for everyone. For one, it only works in Firefox and Safari. Second, its biggest focus is on referrers and how people came to your site, and less on what they did when they actually got there. That alone shows that this is not a stat tracker for everyone, and Shaun Inman has always been forward about that. He's not trying to compete with Google. His market is the small web site owner who wants to know how people are coming to his site. Google offers the same functionality, but it's buried in AdWords-specific jargon and other tracking that most sites have no need for.

Measure Map is so easy to understand

Measure Map is even another step down. It's stats in their most simplistic form. It gives you exactly what you want to know and no more. Even my grandmother would understand the stats that Measure Map is putting out. Another thing no one is pointing out is that Measure Map is blog specific. If you don't have a blog you can't even use Measure Map. This is not a site-wide solution, and there is no reason for it to be. I've only used it for a couple of days and I'm amazed how such a clean and simple interface can display so much information. Websites like this help remind me that graphic artists like Shaun Inman, Jeffrey Veen, and myself need to be more involved in web startups.

I actually plan on dropping Analytics in the near future. It's not giving me anything I can't get anywhere else. Measure Map has shown me data in a way I have not seen before and has helped me see my blog in a new light. Google Analytics will be bigger than Mint or Measure Map, but it will not own the market. Mint and Measure Map provide uniquely distinct stats in a way Google can't, and in that light they have already won.

measuremap, mint, seaninman, jeffreyveen, analytics, google, stats, counter, blog

I recently had a chance to help Liz Danzico over at AIGA with some podcasts. First one on the block was An Event Apart, being held in Philly with three of my heroes: Jeffrey Zeldman, Eric Meyer, and Jason Santa Maria. While I did little more than cut some audio together and take some work off of Liz's hands, I'm all too excited to be involved with all of it. Check out Jeffrey talking about all sorts of crazy stuff and stick around to hear my name at the end! Let's hope this is only the first time my name is in the same place as Jeffrey's.

On another note, I'm happy to be helping AIGA get more involved in web design. Hopefully I will be able to not only get the print world to take notice but also get more web designers involved in AIGA. So many seem not to join because AIGA doesn't do much for web design, and I hope we can bring the kind of value to AIGA that will give web designers a reason to be more involved.

aiga, podcast, webdesign, zeldman, jasonstantamaria, ericmeyers, lizdanzico, aneventapart

Want to start a tech company and need some help? Although I have not met them all, here is a list of all the Boston residents who are looking to change the world.

I hope to be able to come back to this list in 5 or 10 years and count the millionaires who changed the world. If you want to join the fun, check out the Boston Startup Meetup group.

startupschool, boston

I have no idea if I like it or not yet. It's in the flavor of Gmail with tons of JavaScript. All I do know is that loading 181 feeds from Bloglines into Reader seems to be taking forever. Updates to come as I continue to play with it.

Google Reader

All the Digg.com kids are having fun bashing it. Slashdot too. Lots of people are saying they like Bloglines better. In general Bloglines is okay, but it's still painful to use.

Update: In general the feed reader has an interesting focus, and one that Keri might agree with. Gmail has shown how Google can really get to the heart of what people are trying to do with technology, and I think they really hit the nail on the head with their feed reader. The way I use Bloglines is almost like bookmarks. If I find an interesting web site I subscribe, and that's how I have 181 feeds in Bloglines.

When a feed updates, Bloglines bolds the feed and tells me how many new posts there are. There are a lot of feeds I am subscribed to that I don't check daily, like Slashdot, Robert Scoble, and Metafilter. I have my list of favorite sites, and I start checking them until I run out of time. When I come back later I start back at the top of the list of favorites and work my way down again. This way some of the blogs at the bottom of my list rarely if ever get checked. Some blogs I don't really read everything from; I like to wait till they have a bunch of posts and then quickly look at the headlines to see if anything jumps out. I believe this way lets me see what I'm really interested in, as well as get a general knowledge of what some of the people I don't have time for are talking about.

Google Reader really focuses on the individual post. When a blog updates, the post is put at the top of your queue and that's it. This system is a very post-driven reader, as opposed to Bloglines, which is subscription driven. The current way I read blogs is extremely subscription driven, though I can see how a system based on new posts is more focused on reading new content now and not saving it for later. Right now I like saving it for later and then skimming it and stopping if I'm really interested. It does allow for free tagging, though I'm not entirely sure how you navigate the tags.

In general I'd say I like it, but there is something about it that I find off-putting and I can't put my finger on it.

Update: TechCrunch has a pretty fair review.

reader, google, rss, bloglines, gmail, web2.0, xml, atom, feed, tags

Tantek has a great presentation up on Semantic XHTML. Too many designers, myself included, get lost in DIV soup when there are tons of usable tags that come with meaning already built in. If you know CSS and are looking for a way to hone that skill, this is the presentation to look to.

Some stuff like Semantic tables still really confuses me. I know I can get it if I try!

xhtml, tantek, s5

Stack of Books

LibraryThing is a great new app that lets you catalog your books. It's in early beta, and although it needs a lot of features as far as search, design, and usability go, it's a stellar start to what could be the next great website. It's so easy to bash sites for what they don't have, so I'm gonna make an extra effort to highlight why it's so great.

First, as a designer and developer, I was glowing when I saw the “One-step sign up / sign in”. This is such a usable sign up form that it almost made me cry. Simplicity at its best. After you sign in you can go to the Add Books tab and start searching for books. Search results are listed in a side iframe, which allows for dynamic loading of results. It's a wonderful use of AJAX: if you are unhappy with your search, you can edit what is already in the search box. Clicking any link will add the book to your library, and if you make a mistake you can just X it out directly after you add it. It's some nice work of JavaScript. Back on the Catalog page you can view all your books. On the right you can edit the book and add tags (yes, tags!) and comments, as well as view the Library of Congress Card Data. All the text is searchable through a little JavaScript find link on the top right. Other features include exporting to a CSV format that works with Excel and (still being worked on) importing your library from Delicious Library. The free account only lets you index up to 200 books. The best part is that the pro account is a one-time fee of $10, which, if you have more than 200 books, is a very worthwhile investment.

Okay, now for the downside. While it's a wonderful site and I love it to death, it still has some growing pains it needs to get through before it comes out of beta. One immediate downside is that you can not browse for books. This will hopefully be fixed in the future, as for some searches like “Harry Potter”, where hundreds of books are listed, it's hard to find the six published books you are looking for. On top of this, when you do search for “Harry Potter” you can only view the first 10 or so entries. It shows how many entries are not displayed but gives no option of viewing them. Currently the search function is very particular: a search such as “harry potter and the azkaban” for “Harry Potter and the Prisoner of Azkaban” returns no results, while Google doesn't even blink. A hack (or feature) around this is to separate items with a comma. While “harry potter and the, azkaban” does result in a search for “harry potter and the” and separately “azkaban”, it still returns results that are too general to find the book you are looking for. If you do find multiple books you own in one search, it's difficult to add them all without constant use of the back button. Selecting a book refreshes the page and clears the search bar, forcing you to retype your search. The hack around this is to (in Firefox) middle-click and open each book in a new tab for multiple selection. This does not seem much easier than using the back button, but either way there should be a way for JavaScript to better handle it.

On the Catalog page there are also a number of usability issues. Top left is a Display list that shows what appears to be the multiple pages your library spans. In fact this is a list to change how the books are displayed. In preferences you can edit what fields of information are shown and have multiple presets. This makes changing how you browse the books easy and is very useful. Still, it's not clear at all that the Display list has anything to do with the preferences until you mess around with it for a few minutes. One reason it is not clear is that there is no way to jump to a page of your library. You can only display up to 50 books at a time, and you only have the option of next or previous pages. The creator of the site, Tim Spalding, currently has over 400 books, and how he pages to the end is beyond me. Not to mention that this sparse navigation is not repeated at the bottom of the page, causing even more unneeded effort to get through your massive library.

Another shortcoming is the inability to add other users' books to your own collection. How easy it would be to build up your library by finding a user with similar taste and just clicking all the books that you also own. With a little JavaScript it could make short work of quickly indexing your library. As it stands, the site is very focused on what you own and not on looking at what other people own, which is slightly counterintuitive to the social aspect of the site. While books link to Amazon (where Tim surely hopes to get some Amazon affiliate love), there is no current benefit to searching others' libraries. You can't add books to your own list without searching for them. What the site really needs is a wish list of books. This would get people in the buying mindset and in fact might generate more money for the site, possibly even removing the need for the pro account. Otherwise it makes little sense to click the books in your library that you own to buy them again from Amazon.

Wow this is really long. While I have been rather harsh to LibraryThing only my love for it could develop this much passion (which is the new way to market of course). When it’s all said and done the site is in beta and is being actively developed. While it is interesting that some elements are done beautifully and some are badly designed or even missing there is every reason to believe that they will be addressed and quickly.

Visit my Library at STHayden!

Update: (11-01-05) Most of the problems listed above have since been fixed.

LibraryThing, library, catalog, ajax, Tim Spalding, Delicious Library, tagging

One of the hard parts of web design is conditioning yourself to output code that works in all browsers. I don't know how many people can code web sites without needing to check to see if the code actually works, but I am not one of those people. Searching the internet for solutions, there are services to check how your page looks in other browsers, but they are all rather expensive for recent college graduates like me.

The other solution that some subscribe to is called the “Avoid the Cutting Edge” approach. For the most part I am appalled by this approach, as I can not think of any other industry that advocates staying away from new and better ways of doing business. I feel an “Understand the Cutting Edge” approach is a better outlook to have. It's important to know what the new technologies can do. Yet no matter how fancy a new feature might be, if it does not work in the dominant browser (currently IE) then the feature is dead in the water. Though I am waiting for the “Killer App” that will propel Firefox to the forefront. Somewhere out there is an idea that can't run on IE, and if it were implemented for everyone else, the mass exodus from IE would begin.

Until that killer app comes I will continue to push IE as far as it will go, and I will need all the help I can get checking to make sure my web pages work in every browser without spending a fortune. A long time coming, someone has finally started an open source project that lets you see your site in different browsers.

Browsershots works by letting you submit a URL; it then renders out PNGs of what your page looks like in different browsers. I'm unclear exactly how it works, but from what I surmise it has a central server that dishes out jobs to people who are running the script on their computers, similar to how SETI@home works. I downloaded the script and tried to see if I could figure out how to run it off my Dreamhost account, but did not have any luck. The project is still in its baby stages, though it has already handled some very large hits from del.icio.us and the like. When you submit your link it gets put in a pool, and the rendered PNGs are viewable on their site. This is a very public way of testing websites and would especially be a problem for larger projects that need to be kept secret. It would be nice if there were an easier way to run it privately, which it says it can do, but it is not documented well enough for me to figure it out. Perhaps it could use a wiki for documentation to help speed the writing up, as I would love to run my own version.

While it tells you the stats that the PNG was rendered with next to the image, it does nothing to help you with any display problems you might have. Perhaps it's a pipe dream, but it would be amazing if it pointed out likely problems that different browsers will have with the code (XHTML, JavaScript, CSS) you have written. I ran into my own error, as I got the error shown below and can't figure out why. I get it for both my site and the one I'm working on. Let me know if you figure the problem out, because as of right now I'm stumped.

my strange error

FeedLounge launched its alpha version (is that really a launch?) just days ago and it's already generating a lot of buzz. From the sound of it I'm already preparing to make the switch. Reading others talk about RSS, I'm always surprised by the number of people who use desktop RSS readers. As a college student (just graduated) I almost can not imagine that. Living in a world where I was up in campus life or the art labs for hours (or days) on end without my RSS reader handy is a painful thought. For these reasons, whenever I have been given a choice between desktop and web solutions I have always chosen web. Bloglines has served me the way Hotmail did the job in the 90s: bloated, clumsy (ewww, frames), hard to find help. I found the Bloglines help forum once and have never been able to find my way back to it since. I had hoped a quick redesign would have happened once Ask Jeeves bought it, as several of Google's purchases seem to have undergone.

Bloglines was my first love and really introduced me to RSS. I will never forget Bloglines, but I've been ready for a new RSS relationship for a while now. I knew someone somewhere was working on a better solution, and I think FeedLounge will be it.

I'm kinda hot, and instead of being original I've decided to just ReBlog:

Now available at Rhizome: a Raw RSS feed!

http://rhizome.org/syndicate/raw.rss

Right now this feed re-posts the entire text as posted originally. So it’s suitable for reading, reposting, etc., etc. There are now three separate ways to track the discussion on Raw: by email, by web, and by RSS syndication feed.

I set the feed to track the last 40 items, which right now means it’s a sort of big feed, at 87k. Of course with Raw’s traffic the way it is, the resulting feed only tracks about the last 36 hours worth of posts. I’m considering excerpting some of the bigger posts, and having more posts per feed, if 36 hours isn’t enough … anyway, let me know if you’re using it and have suggestions for it.

This is one of those things that we couldn’t have done three weeks ago. I hope y’all find it useful.

Francis Hwang
Director of Technology
Rhizome.org

I was looking for a guide to making an RSS feed for a site that didn't have one* and found the internet very much lacking in what it provided. It turns out that there is not as much to choose from as a Google search would lead you to believe. Dennis Pallett seems to be one of the only people to have written a tutorial, and it is featured on many, many different sites. While it was very helpful, I still thought it was lacking in its explanation.

So I'm going to use Dennis' code along with how I altered it, go over what I did and what things mean, and hopefully paint a clearer picture of how to turn the world into an RSS feed.

Screen scraping an RSS feed is based on some simple concepts: grab each individual post into an array, then grab the title, permalink, and full text out and throw them into their respective RSS tags.

<?php

$url = "http://www.gailgauthier.com/blogger.html";

$data = getUrlTEXT($url);

// Get content items
preg_match_all("/<div class=\"posts\">([^`]*?)<\/div>/", $data, $matches);

getUrlTEXT is a quick function, defined at the bottom of the script, that uses cURL to get the full text of any URL.
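
Since the real function lives at the bottom of the full script and isn't shown in this excerpt, here is only a sketch of what a cURL-based getUrlTEXT typically looks like; the exact details are an assumption, but any fetch of this shape will do:

```php
<?php
// Sketch of a cURL-based getUrlTEXT. The original is defined at the
// bottom of the full script; this version is an assumed stand-in.
function getUrlTEXT($url) {
    $ch = curl_init($url);
    // Return the response body as a string instead of printing it
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
?>
```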

preg_match_all and preg_match are both functions that I don't fully understand, but they use strings of characters called ‘regular expressions’ that tell PHP what text to include and not include (regular expressions guide here).

While I don't completely understand it, I can point some things out. preg_match takes 3 arguments. The first is the pattern you are looking for, the second is the full text you are searching, and the third is an array that all the found strings are put in.

The first argument is wrapped in double quotes, with forward slashes marking the start and end of the pattern (ex. "/REGULAR EXPRESSION HERE/"); backslashes escape characters that would otherwise have special meaning, like the / in </div>. You also see the start and end of a div as part of the expression. This tells preg_match what to look between to find the text you are looking for. The regular expression between the div tags, ([^`]*?), is a capturing group that matches as few characters as possible of anything that isn't a backtick, which is a handy trick for matching across multiple lines. Needless to say, it finds whatever is in between whatever you put on either side of it.
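
To make that behavior concrete, here is a tiny self-contained demo (the HTML string is invented for illustration; it mirrors the Blogger markup above):

```php
<?php
// Minimal demo of preg_match_all capturing repeated blocks.
// The HTML here is made up for illustration.
$html = '<div class="posts">first post</div><div class="posts">second post</div>';

preg_match_all('/<div class="posts">([^`]*?)<\/div>/', $html, $matches);

// $matches[0] holds the full matches (tags included);
// $matches[1] holds just the captured text between the tags.
echo $matches[1][0] . "\n"; // first post
echo $matches[1][1] . "\n"; // second post
?>
```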

Next you can just set up the RSS header information.

// Begin feed
header("Content-Type: text/xml; charset=ISO-8859-1");
echo "<?xml version=\"1.0\" encoding=\"ISO-8859-1\" ?>\n";
?>

<rss version="2.0"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:content="http://purl.org/rss/1.0/modules/content/"
xmlns:admin="http://webns.net/mvcb/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
<channel>
<title>Original Content—A Gail Gauthier Blog—Latest Content</title>
<description>The latest content from Gail Gauthier (http://www.gailgauthier.com/blogger.html), screen scraped!</description>
<link>http://www.gailgauthier.com/blogger.html</link>
<language>en-us</language>

Nothing hard here, just change the title, description and link. The rest of the information is standard for the RSS file.

Next we loop through the ‘matches’ array we made to extract the title, permalink, full text and author name.

<?php

// Loop through each content item
foreach ($matches[0] as $match) {

// First, get title
preg_match("/<h3>([^`]*?)<\/h3>/", $match, $temp);
$title = $temp[1];
$title = strip_tags($title);
$title = trim($title);

// Second, get url
preg_match("/<span class=\"byline\">posted by gail at <a href=\"([^`]*?)\">/", $match, $temp);
$url = $temp[1];
$url = trim($url);

// Third, get text
preg_match("/<\/h3>([^`]*?)<span class=\"byline\">/", $match, $temp);
$text = $temp[1];
$text = trim($text);
$text = str_replace('<br>', '<br />', $text); // normalize line breaks to XHTML-style <br />

// Fourth, and finally, get author
preg_match("/<span class=\"byline\">By ([^`]*?)<\/span>/", $match, $temp);
$author = $temp[1];
$author = trim($author);

As you can see, getting the title is simple enough: it's the only thing inside of <h3> tags. The permalink was much harder, though, since it was not between any specific tags. Instead I used the string '<span class="byline">posted by gail at <a href="'. This is obviously a very bad hack to get what you want, but not all sites will be nicely set up for you to make into an RSS feed.

if (!($title == '') && !($text == ''))
{
// Echo RSS XML
echo "\t\t<item>\n";
echo "\t\t\t<title>" . strip_tags($title) . "</title>\n";
echo "\t\t\t<link>" . strip_tags($url) . "</link>\n";
echo "\t\t\t<description>" . strip_tags($text) . "</description>\n";
echo "\t\t\t<content:encoded><![CDATA[\n";
echo $text . "\n";
echo "]]></content:encoded>\n";
echo "\t\t\t<dc:creator>Gail Gauthier</dc:creator>\n";
echo "\t\t</item>\n";
} // end if
} // end foreach

// Close the tags opened in the feed header
echo "</channel>\n</rss>";

?>

After we find all our information we just need to print it out in RSS format. First I do a quick check that the post has actual information in it, and then it is just output into the tags.

That's about it for this method of screen scraping an RSS feed. I've heard Kottke mention ways of using the DOM from PHP 5, but I am still working on learning the DOM in general. You can download the full PHP script here.
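
For the curious, here is a rough sketch of what that DOM approach might look like with PHP 5's DOMDocument, assuming the DOM extension is available; the markup string is invented to mirror the Blogger structure above:

```php
<?php
// Rough sketch of scraping with PHP 5's DOM instead of regular
// expressions. The HTML string is made up for illustration.
$doc = new DOMDocument();
// @ suppresses warnings from imperfect real-world HTML
@$doc->loadHTML('<div class="posts"><h3>Post title</h3>Body text</div>');

$xpath = new DOMXPath($doc);
// Find every div whose class attribute is "posts"
foreach ($xpath->query('//div[@class="posts"]') as $post) {
    // Pull the title out of the h3 inside this post
    $title = $xpath->query('.//h3', $post)->item(0)->textContent;
    echo trim($title) . "\n"; // Post title
}
?>
```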

* It turns out the pages Blogger was FTPing to Gail Gauthier's site just were not listing the Atom feed in the HTML. I later found the feed and had to just be happy that I learned something.

AIGA has a great list of symbol signs that are free for everyone, but I don't think most people take advantage of it. While a lifesaver for anyone making a map, they are useful in other situations as well.

Working on a site that has to do with recycling, I thought there was no better place to look for a quick dingbat than the symbol signs that AIGA provides. Much to my surprise, they did not have the recycling symbol. In fact, while the list is a great resource, there seem to be a lot of useful symbols not available, such as the biohazard, nuclear, and handicapped symbols.

In an effort to help other designers not have to spend time designing little dingbats, here is my list of useful dingbats:

Recycle Symbol eps
Biohazard Symbol eps

my dead hard drive

Or at least never buy a Western Digital FireWire/USB 2.0 Combo hard drive. Poking around, I'm not even sure they sell them any more, but I don't know if I trust Western Digital any more either. Reviews by anyone who has had one for any amount of time have not been glowing, since its average lifespan is 6 months or so. That certainly was the lifespan of mine.

6 months in it started to make a small sparking noise and you could tell the drive was not spinning up when plugged in. Not to mention the crazy lights that flashed on the front. The drive had no on/off switch, which at the time I did not give much thought, but looking back it might be a sign of flimsy manufacturing. I sent it back only to have it die on me again about 6 months later after the same sparking and flashing.

I tried to save the Hard Drive in the case but after connecting it to my computers power source it started to smoke. I wanted to take a picture but didn’t want to fry my computer.

The Western Digital customer service staff seems very convinced that slightly over the 1-year service warranty is a good life for a FireWire drive. I seem to feel that two six-month hard drives would not have been as much of a problem if I were renting them instead of owning them.

So I'll just fear technology for a while and hope my shiny new 256MB USB flash drive does not give me any problems.

The New York Public Library has been taking notes from Google. They have made illuminated manuscripts, historical maps, vintage posters, rare prints and photographs, and illustrated books available for free online. Most seem to be in the public domain, and the library is only asking for some sort of fee for high-resolution versions of the images, though I can't find out how much.

As a poor graphic artist new to the field I’m always happy to find new places to get stock photography. Hopefully this is a sign that this digitizing of the past is not just a trend and will really take off. The more power at the tip of my fingers the happier I am.