I’ve just published a new app for Android Wear smartwatches to the Google Play Store. It’s called Your Photos Watch and it lets you view your personal photos as backgrounds to a specially-designed watch face.

watch_pic

Project

I wrote previously about my first Android Wear watch face. With that project done I turned to something a little more complex. When I first received my smartwatch (an LG G Watch), I thought it might be nice to put some family photos on it – like a high-tech version of the photo in your wallet. I was a little disappointed to find that this isn’t something supported out of the box. Nor could I find any photo apps on the Play Store using the obvious keywords.

This seemed like a missed opportunity, as this is something a smartwatch can uniquely offer. As an app developer there was only one thing to do: make the app myself.

Making the app you want to see is an approach I would recommend to anyone looking to get started with app development. Think of those times you go looking for an app to do a particular thing and don’t find one that works the way you want. Build that app and get it out there! Others will find it useful too, I guarantee it!

App design

shot0

I asked myself what form the app should take. A simple picture viewer was one idea, but traditional apps aren’t really a good fit for the tiny screen of a smartwatch. A custom watch face seemed like a better approach. The app could show a fresh picture each time the watch wakes from ambient mode (which happens when you rotate the watch to look at it). The user could select the photos to show using the companion phone app.

Data

The first technical hurdles were to get the photos from the phone app to the watch face app, and to keep them stored there.

In my first experiments I attempted to download photos directly from the internet on the device. I soon found out that this isn’t supported – directly at least – even though it could have been, since the watch is effectively tethered to the phone, which could proxy an internet connection. I realized this was probably by design, to encourage a thoughtful economy of internet use, so I looked at how my plans could be adapted to gel with the API designers’ intentions.

I studied the Data API, and after some experiments it was clear that creating a Data Item per photograph would allow me to synchronize from phone to watch. Further experiments showed that Data Items persist as long as the companion app remains installed on the phone, which means that in practice the Data API can act as long-term storage for the photos. The watch face and companion apps simply query for all Data Items to see which photographs have been selected by the user. When the user chooses to remove a picture, the Data Item is deleted.
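
For the curious, here is a rough sketch of what pushing one photo as a Data Item looks like with the Wearable Data Layer API of the time (GoogleApiClient and Wearable.DataApi). It is illustrative only; the path and key names are made up for this example rather than taken from the app:

import android.net.Uri;

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.wearable.Asset;
import com.google.android.gms.wearable.PutDataMapRequest;
import com.google.android.gms.wearable.Wearable;

// Illustrative sketch, not the app's actual code.
class PhotoSync {

  // Pushes one (already scaled and cropped) JPEG to the watch as a Data Item.
  static void sendPhoto(GoogleApiClient client, String photoId, byte[] jpegBytes) {
    Asset asset = Asset.createFromBytes(jpegBytes);
    PutDataMapRequest request = PutDataMapRequest.create("/photo/" + photoId);
    request.getDataMap().putAsset("image", asset);
    request.getDataMap().putLong("added", System.currentTimeMillis());
    Wearable.DataApi.putDataItem(client, request.asPutDataRequest());
  }

  // Removing a photo later is just a matter of deleting the Data Item by its URI.
  static void removePhoto(GoogleApiClient client, Uri dataItemUri) {
    Wearable.DataApi.deleteDataItems(client, dataItemUri);
  }
}

Because Data Items persist until they are deleted (or the companion app is uninstalled), the same items double as the long-term photo store described above.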

Cropping

subject_top

One issue had yet to be solved. I was expecting to have to down-scale pictures to the resolution of the watch. However, smartwatch screens are square, while most photographs are something like 4:3, so it is not possible to show the whole picture on the full screen of the watch. Some ‘letterboxing’ could be applied to show the entire picture, but on an already small watch face this means diminished visual impact. I’d rather crop the photos, but this means that typically about a quarter of the picture’s area has to be cut out.

Unfortunately, taking the center square of the photo will often crop out the subject. A common case is when the subject is standing in a vertical or ‘portrait’ format picture: the face is often in the top quarter of the frame, not the center, as in the picture above.

example

Here’s an example of a landscape format photograph where the subject (some strange-looking software engineer) is not in the center of the frame. If this photo were cropped for a watch face by taking the center square, half of the subject’s face would be cut from the picture, which in this case would be terrible.

example1

The most obvious option would be to have the user select the area to crop in the companion app. However, this adds an extra step for users to complete, and on mobile, fewer required interactions are always better.

Face detection

My more ambitious idea was to use some off-the-shelf computer vision software to find the faces in each picture. When faces are found, the crop square can be adjusted to keep as many of them as possible in the chosen area.

example3

example4

It says something about technology today that such a computationally-intensive task can be performed on demand on a mobile phone, but a quick web search revealed that this was indeed feasible. The free computer vision library OpenCV emerged as the most likely candidate. An Android port is available and many developers seem to be using it.

[Later edit: Thanks to Ian Lake on the Google+ Android Wear Developers group for pointing me at this article. The existence of this class escaped me in my own research, somewhat embarrassingly. Since the OpenCV solution is working to my satisfaction currently I will leave it as it is for now.]

I had one or two practical issues importing the latest version into the build and getting the face-finding classifiers loaded and working. (Check the AutoCropper class in the project on GitHub if you’re curious.) Some tweaking of the input parameters was needed to strike the right balance between false negatives (faces not recognized) and processing time.
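
As a rough illustration of the approach (not the app’s actual AutoCropper), choosing a face-aware square crop with OpenCV’s Java bindings looks something like this; the cascade file path is assumed to point at a Haar face classifier already extracted to local storage:

import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

// Illustrative sketch of face-aware square cropping.
class FaceAwareCrop {

  static Rect chooseCropSquare(Mat rgbaImage, String cascadePath) {
    Mat gray = new Mat();
    Imgproc.cvtColor(rgbaImage, gray, Imgproc.COLOR_RGBA2GRAY);

    CascadeClassifier detector = new CascadeClassifier(cascadePath);
    MatOfRect faces = new MatOfRect();
    detector.detectMultiScale(gray, faces);

    int side = Math.min(rgbaImage.cols(), rgbaImage.rows());
    // Default to a centre crop.
    int x = (rgbaImage.cols() - side) / 2;
    int y = (rgbaImage.rows() - side) / 2;

    Rect[] found = faces.toArray();
    if (found.length > 0) {
      // Shift the square towards the centroid of the detected faces,
      // clamped so it stays inside the image.
      long cx = 0;
      long cy = 0;
      for (Rect face : found) {
        cx += face.x + face.width / 2;
        cy += face.y + face.height / 2;
      }
      cx /= found.length;
      cy /= found.length;
      x = clamp((int) cx - side / 2, 0, rgbaImage.cols() - side);
      y = clamp((int) cy - side / 2, 0, rgbaImage.rows() - side);
    }
    return new Rect(x, y, side, side);
  }

  private static int clamp(int value, int min, int max) {
    return Math.max(min, Math.min(max, value));
  }
}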

I was pleased to arrive at a solution that works to my satisfaction around 19 times out of 20. The great advantage is that the user has to do nothing to have their photos correctly cropped. In fact I would wager that the majority of users never even think about the cropping process: it works as expected with no involvement from them. Exactly how a good app should be.

Visual design

watch1

Making this app I spent a great deal of time tweaking the watch face design. I thought about exposing settings so the user could change the visual appearance. If enough people ask me I might add this in later versions, but design-wise this can be something of a cop-out. I would prefer to ship the app with a design that makes a strong statement and that I feel works within the design constraints of the app.

The main problem is this: the watch face should communicate the time quickly and clearly. If it doesn’t, it is failing in its primary function as a watch face, and once the novelty of the photo feature has worn off people will switch to a more easily read watch face.

On the other hand, the design should not cover too much of the user’s photo. If it does, the unique selling proposition of the app is lost.

Yet these two requirements conflict: a bolder design is easier to read, but obscures more of the user’s photo.

Faces

shot4

I decided to include both a digital and traditional “analog” watch face.

The analog face took by far the most experimentation. The first problem is the choice of colors: they shouldn’t be too similar to the colors of the background photograph or the watch face will be unreadable, but I can’t predict the colors of the photographs users will select.

A trick for showing text or other details over a multi-colored background is to use two colors side by side that contrast with each other. That way the image carries its own internal contrast, meaning it should be readable whatever the background is. In the case of the watch face I found that a white outline a few pixels thick, coupled with a dark blue fill color, achieves this contrast. Because the effect is quite striking I softened the design with round rather than square edges on ticks and hands. The high contrast allows the hands to be drawn semi-transparent (around 80% opaque) to subtly reveal a small amount of extra detail under the hands and tick marks.

watch2

A dark drop shadow adds even more contrast and creates a subtle three-dimensional effect, lifting the watch face off the background picture.
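
A minimal sketch of this style in Android Canvas terms, two Paints (a white stroke over a mostly-opaque dark blue fill) plus a shadow layer, might look like the following; the colour values and sizes are illustrative rather than the app’s:

import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.RectF;

// Illustrative sketch of the two-colour, high-contrast hand style described above.
class HandPainter {
  private final Paint fill = new Paint(Paint.ANTI_ALIAS_FLAG);
  private final Paint outline = new Paint(Paint.ANTI_ALIAS_FLAG);

  HandPainter() {
    fill.setStyle(Paint.Style.FILL);
    fill.setColor(Color.rgb(16, 32, 96)); // dark blue fill (illustrative value)
    fill.setAlpha(204);                   // roughly 80% opaque

    outline.setStyle(Paint.Style.STROKE);
    outline.setStrokeWidth(4f);           // a white edge a few pixels thick
    outline.setStrokeCap(Paint.Cap.ROUND);
    outline.setColor(Color.WHITE);
    // A dark drop shadow adds contrast and a slight three-dimensional effect.
    outline.setShadowLayer(6f, 0f, 3f, Color.argb(160, 0, 0, 0));
  }

  // Draws one hand as a rounded bar pointing up from the centre, rotated to 'degrees'.
  void drawHand(Canvas canvas, float cx, float cy, float length, float width, float degrees) {
    RectF bar = new RectF(cx - width / 2, cy - length, cx + width / 2, cy);
    canvas.save();
    canvas.rotate(degrees, cx, cy);
    canvas.drawRoundRect(bar, width / 2, width / 2, fill);
    canvas.drawRoundRect(bar, width / 2, width / 2, outline);
    canvas.restore();
  }
}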

The digital face was a lot more straightforward. I chose to display hours and minutes in a large-ish Roboto font in the bottom half of the screen. Because it occupies relatively little of the screen, the risk of obscuring the user’s photos is low. The same contrast-enhancing methods as in the analog face were used.

Data sources

shot2

As the user has to supply photos for the watch face, I needed to work out where these would come from. The most obvious and easiest source is the photos that the user has already stored on their phone (e.g. ones they have taken with their phone camera).

Phone

A query of images in the Media Store handles this easily, once the relevant permission is added to the manifest.

Two classes handle importing the photos and presenting them in a Recycler Adapter. (This project marked the first time I’ve used the new RecyclerView on an Android project.)

One nice feature of this API is that the pictures can be requested in a couple of thumbnail sizes. The MINI_KIND size is approximately 512 x 384, which is actually larger than the typical target watch size of 320 x 320. That way my app doesn’t have to downscale massive camera images itself.
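
A minimal sketch of that query, assuming the READ_EXTERNAL_STORAGE permission is declared in the manifest (the class and method names here are made up for illustration):

import android.content.ContentResolver;
import android.database.Cursor;
import android.graphics.Bitmap;
import android.provider.MediaStore;

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of a Media Store thumbnail query.
class DeviceImages {

  static List<Bitmap> loadThumbnails(ContentResolver resolver, int limit) {
    List<Bitmap> thumbnails = new ArrayList<>();
    String[] projection = {MediaStore.Images.Media._ID};
    Cursor cursor = resolver.query(MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
        projection, null, null, MediaStore.Images.Media.DATE_TAKEN + " DESC");
    if (cursor == null) {
      return thumbnails;
    }
    try {
      int idColumn = cursor.getColumnIndexOrThrow(MediaStore.Images.Media._ID);
      while (cursor.moveToNext() && thumbnails.size() < limit) {
        long id = cursor.getLong(idColumn);
        // MINI_KIND is roughly 512 x 384; comfortably larger than a 320 x 320 watch.
        thumbnails.add(MediaStore.Images.Thumbnails.getThumbnail(
            resolver, id, MediaStore.Images.Thumbnails.MINI_KIND, null));
      }
    } finally {
      cursor.close();
    }
    return thumbnails;
  }
}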

Google

shot1

Given that any user with access to the Play Store will have their phone connected to a Google account, this suggested another obvious photo source. Interestingly there is no modern API to access photos shared on Google Plus, or photos that might have been saved with the photos backup feature from one of the user’s previous Android phones. However, these photos can be obtained from the aging Picasa API. I say aging, because this uses an earlier version of the GData API and by default returns XML.

This part of the operation turned out to be one of the trickiest parts of the project, something I had not anticipated at all.

I made several wrong turns trying to parse the XML, for instance by reading a document and using XPath expressions. After a lot of annoyances with namespaces in the XML, I got it working, just. However, long delays in the app showed that this approach is simply far too slow on the phone when hundreds of photos are being processed.

Then I tried to use the official GData API Android libraries. However, this took my app over the 65,536 method reference limit (the ‘64K’ Dex limit). I experimented with various ways to work around this, such as ProGuard rules to strip unused content and multi-dex mode. However, ProGuard was breaking OpenCV by stripping symbols it was using (the large amount of native code in OpenCV complicated this). I spent many hours trying to get the ProGuard rules right before deciding I should move on. Multi-dex mode was similarly troublesome.

I then went back to parsing the XML myself, this time using a faster ‘forward parser’ (SAX) which interprets the XML stream without building a full version of the document in memory (a ‘DOM’). This approach is faster but the code is considerably harder to develop and read than XPaths. It probably would have worked, but I stubbornly wanted to separate the code that handled the parsing of each photo fragment into a different class, to match the existing structure of the code. Unfortunately, Java’s SAX implementation has a ‘push’ structure – the parser controls flow and calls back a handler with information about the file data – rather than a ‘pull’ structure where the main program calls the parser to iterate over the data. This means the whole file has to be dealt with by a single handler. I looked around for an XML parser with a ‘pull’ approach but again felt that I was wasting time.

Then the penny dropped; I vaguely recalled that older Google feeds can, on demand, return a form of XML translated into JSON. JSON has a wider selection of parsers on Android, including a ‘pull’ parser: android.util.JsonReader. Sure enough, adding ‘alt=json’ to the feed URL returned a translated feed. I could have the best of both worlds: my code could be structured the way I wanted, and reasonable performance could be obtained too.
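
The shape of a pull-parse over the translated feed is roughly as follows. The ‘feed’ and ‘entry’ field names follow the usual GData JSON convention, but treat them as assumptions to be checked against the real response; the point is that each entry can be handed off to its own handler class:

import android.util.JsonReader;

import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

// Sketch of pull-parsing a GData-style JSON feed; the field names are assumptions.
class FeedParser {

  interface EntryHandler {
    // Must consume exactly one entry object from the reader.
    void onEntry(JsonReader reader) throws IOException;
  }

  static void parse(InputStream in, EntryHandler handler) throws IOException {
    JsonReader reader = new JsonReader(new InputStreamReader(in, StandardCharsets.UTF_8));
    try {
      reader.beginObject();
      while (reader.hasNext()) {
        if ("feed".equals(reader.nextName())) {
          reader.beginObject();
          while (reader.hasNext()) {
            if ("entry".equals(reader.nextName())) {
              reader.beginArray();
              while (reader.hasNext()) {
                // Each photo entry can be handed to a separate class to consume,
                // which is awkward to arrange with a push (SAX-style) parser.
                handler.onEntry(reader);
              }
              reader.endArray();
            } else {
              reader.skipValue();
            }
          }
          reader.endObject();
        } else {
          reader.skipValue();
        }
      }
      reader.endObject();
    } finally {
      reader.close();
    }
  }
}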

Authentication to get access to a user’s private data was the remaining challenge. For Google GData services it is possible to authenticate with the aid of the phone’s account manager. There is something of a back-and-forth (sometimes called the ‘OAuth2 dance’) to get the user’s email address (required) and an up-to-date access token to pass into the API.
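
In outline the token fetch looks something like this, using the account manager plus GoogleAuthUtil from Google Play services. The Picasa scope string and the use of the first Google account on the device are assumptions for the sake of the example (and the GET_ACCOUNTS permission is needed):

import android.accounts.Account;
import android.accounts.AccountManager;
import android.content.Context;

import com.google.android.gms.auth.GoogleAuthException;
import com.google.android.gms.auth.GoogleAuthUtil;

import java.io.IOException;

// Sketch only; the scope string and account choice are assumptions for illustration.
class PicasaAuth {
  private static final String SCOPE = "oauth2:https://picasaweb.google.com/data/";

  // Must run off the main thread. A UserRecoverableAuthException means the user
  // needs to be shown a consent screen, after which the call is retried.
  static String fetchToken(Context context) throws IOException, GoogleAuthException {
    Account[] accounts = AccountManager.get(context).getAccountsByType("com.google");
    if (accounts.length == 0) {
      throw new IOException("No Google account on this device");
    }
    return GoogleAuthUtil.getToken(context, accounts[0].name, SCOPE);
  }
}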

Facebook

shot3

I could have gone on to add access to lots of other services such as Dropbox, Flickr, Instagram and more. If this app gets super-popular and I get requests to do so, I might. However, there was one obvious candidate given its popularity on Android and its history as a photo-sharing service, and that’s Facebook. By allowing import of photos shared on Facebook I could further increase users’ chances of finding treasured photos for the watch face (and get some useful experience with the API for myself too!)

The last time I dealt with Facebook as a developer was at work, some time around 2009, when I added some integration widgets to the AdSense publisher controls. I remember this being reasonably straightforward, but even so taking a new look five years later I was amazed; Facebook’s developer experience is incredibly slick, for Android at least. They supply an Android library that integrates with the Facebook Android app making user management (signing in and out), authentication and data collection incredibly easy. I had it running quite quickly; the most complex part was some logic to select the most appropriate-sized version of images from the selection offered.

Another hurdle was that apps that request permission to view user photos need to be manually reviewed by Facebook to ensure they meet the terms of use and offer a good user experience. This was understandable but added a little paperwork and worry (for one thing, would the reviewers even have an Android Wear device to hand to test the app?) In the event, they quickly approved my request.

The app

Like all my apps it’s completely free. It’s on the Play Store from today. This one is licensed under Apache 2, and the source can be found on GitHub.

Feel free to get in touch on jimblackler@gmail.com if you have any thoughts or queries about the app.

feature

I made a watch face for Android Wear which is in the Play Store now. It’s inspired by the clock tower at the Palace of Westminster, London, better known as Big Ben.

hero_watch

Idea

Recently I was given an LG G Watch. I loved it as an introduction to Android Wear, and I found the device much more useful than I was expecting. Not having to get my phone out of my pocket to check my notifications is surprisingly useful.

I also loved the ability to choose watch faces. I’ve previously written about an animated Big Ben effect. That was written in SVG for use on the web, but I had a ‘light bulb moment’ when I realized that this existing work could be adapted for Android Wear. Most of the difficult work (preparing the digital images and proving the concept would work) had been done.

big_ben_2

Project

The first job was to update to Android Studio (I’ve been a long-standing Eclipse user to date) and to learn how to code for Android Wear. I was pleased to discover that it’s not too different to coding for a phone; it’s basically Android on the device, slightly cut down for the form factor. The hardest part for me was understanding how the two app executables (APKs) you have to deliver (one for the phone, one for the watch) work together, and how deployment (e.g. to the Play Store) is supposed to be done.

Design

two

I studied the design guidelines and realized one adjustment would need to be made: the introduction of an ‘ambient mode’ where the screen is mostly black. This meant drawing new stylized versions of the watch face and hands, which I did in Inkscape. I tried to capture the iconic Gothic style of the original while using outline effects so as to draw very little content in ambient mode.

Code-wise it was then a matter of studying the watch face samples and producing my own version. I used Android canvas to scale, rotate and overlay the bitmap watch face elements.
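
The drawing itself boils down to a save/rotate/draw/restore pattern per hand. A hedged sketch, assuming each hand bitmap has its pivot at its bottom centre:

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;

// Sketch of rotating a pre-drawn hand bitmap around the dial centre.
class BitmapHands {
  private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG | Paint.FILTER_BITMAP_FLAG);

  // degrees: 0 points the hand straight up; e.g. minutes * 6 for the minute hand.
  // Assumes the hand bitmap's pivot is at its bottom centre.
  void drawHand(Canvas canvas, Bitmap hand, float centerX, float centerY, float degrees) {
    canvas.save();
    canvas.rotate(degrees, centerX, centerY);
    canvas.drawBitmap(hand, centerX - hand.getWidth() / 2f, centerY - hand.getHeight(), paint);
    canvas.restore();
  }
}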

Check it out

The app can be downloaded from the Play Store now. It requires an Android Wear watch with Android 5 (Lollipop) or later.

Source for the project is available on GitHub.

IMG_20141220_145738

I regularly read Reddit on a phone, and I’ve come to admire a particular bot, autowikibot. When someone posts a link to a Wikipedia article the bot replies with an excerpt from the article directly in the conversation. Without the bot’s reply, I’d have to follow the link to understand why it was posted – taking me out of the app and incurring a delay and data use.

I noticed that there was a similar problem to be solved with YouTube links. Reddit users regularly post these, but unlike Wikipedia links there’s nothing in the URL to indicate what the video might be about, just a string of characters such as “dQw4w9WgXcQ”. When I’m reading a conversation on Reddit and someone posts a YouTube link without explanation it’s frustrating; I have to leave the Reddit app to find out why that post was made.

I built a bot in Python using the Reddit API (via the superb PRAW) and the Google API for YouTube for video statistics, all hosted on the cloud application platform Heroku.

bot2

It’s up and running now, and can be seen in action at http://www.reddit.com/user/youtubefactsbot

What is Reddit and what are bots?

Reddit is a massive and hugely popular discussion website. It has hundreds of millions of users, and thousands of subreddits (discussion pages). As well as internet users talking amongst themselves, the Reddit API allows the creation of ‘bots’. These can look like normal Reddit accounts, but their activity is controlled by an automated process. The bots join in conversations; they typically react to a phrase and reply in order to provide information or amusement.

How does it work?

mobile

A bot is really nothing more than a manually-registered Reddit account being controlled through the API by a long-running program on a computer somewhere. Comments are fetched and analysed by the program; if it chooses to reply, it does so through a POST to the API.

PRAW makes this process very easy with a helper method called comment_stream(). This allows you to get a look at submissions and comments as they are posted. Provided not much extra processing is needed, it’s feasible to keep up with the comment stream and react to every post.

My bot simply runs a regular expression over the comment to extract YouTube links, gets the video ID and fetches the data from the YouTube API. Most of the logic in the app is around formatting the comments and obeying Bot Etiquette.
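
The heart of that step is just the video ID extraction. The bot itself is Python, but the idea, sketched here in Java for consistency with the other examples on this blog, is a single regular expression (covering only the two most common URL forms):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative only; the real bot is Python and handles more URL variants.
class YouTubeLinks {
  // Matches youtube.com/watch?v=<id> and youtu.be/<id> style links.
  private static final Pattern VIDEO_ID =
      Pattern.compile("(?:youtube\\.com/watch\\?v=|youtu\\.be/)([\\w-]{11})");

  static String extractVideoId(String commentBody) {
    Matcher matcher = VIDEO_ID.matcher(commentBody);
    return matcher.find() ? matcher.group(1) : null;
  }
}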

Bot etiquette

From the outset of this project I wanted to ensure that Reddit users would find the bot useful rather than annoying. It’s important to remember that a bot is a machine process injecting itself into a human conversation. This is one reason why bots have a mixed reputation on Reddit, and subreddit moderators can choose to ban individual bots (very likely if they cause annoyance). I took care to err on the side of caution with the bot’s interaction.

At the time of writing, the bot:

  • Only replies to very short comments; normally just a raw link without context.

  • Doesn’t reply to a comment if there’s a reply already, so as not to break the flow of the conversation.

  • Attempts to delete any of its comments if they are downvoted (effectively allowing readers to delete bot comments).

  • Only adds a single comment to any given submission (thread), except in very large threads.

In addition, the creator of autowikibot made not only the source for his bot available online, but also (via Reddit’s wiki feature) the user and subreddit blacklists. These are users who have requested that the bot not reply to them, and subreddits that have banned the bot. By applying the same blacklists to the youtubefacts bot from day one I was able to reduce the risk that the bot would comment where it wasn’t wanted.

Mainly I took care to make the information posted by the bot about the videos as information-dense as possible in order to justify its position in the threads. I have a ton of information available from the API, but a lot of it (such as bit rate, comment count and more) simply wouldn’t be interesting enough to justify its place in the thread. I decided not even to include the channel (YouTube poster) name. I include the video name, running time, view count and posting date. I also include the first line of the description as it often adds useful information about the context. I don’t do this if it contains a link, so as not to potentially introduce spammy links into Reddit threads, and also because those kinds of descriptions tend to be promotional rather than informative.

Source

The source is also online at https://github.com/jimblackler/youtubefactsbot and licensed under GPL.

If you want your own implementation you’ll have to register applications on both Reddit and Google API (YouTube), sign into both accounts locally, and upload the secrets and tokens folder to your application on Heroku.

Hope you like the bot.

bigpic

Here’s a web-based version of Conway’s Game of Life. The rules are the standard recipe but I’ve added a color effect to the cells. At startup the cells are assigned colors with random hues. Newly-created cells take the color most similar to the neighbouring cells. When cells die I leave a trace of their color in the vacant cell background.

Adding color makes the simulation more striking, and it also vividly illustrates how local patches of the simulation share common ancestry.

About Life

Conway’s Game of Life is a wonder of mathematics. Invented by mathematician John Conway in the 1970s, it isn’t really a game but a simulation experiment involving squares in a grid (‘cells’) that are either ‘live’ or ‘dead’. Any live cell without two or three neighboring live cells (including diagonals) becomes a dead cell on the next turn (or ‘generation’). Any dead cell with exactly three neighbouring live cells becomes live on the next generation.

trad_glider

From those very simple rules arises a fascinating array of complex behavior. The menagerie of structures and creatures that can emerge has been studied continuously since Conway invented his game. You can read all about Oscillators, Spaceships, Reflectors and more on LifeWiki and elsewhere. The small library of Life shapes that I include in my simulation was borrowed from the excellent resources on LifeWiki.

Life is one of those ideas that is so simple it begs for variations. For my project I decided to make an implementation that kept the basic Life rules but added colors to the cells.

Demo

My demo is written in JavaScript and uses Canvas for the graphics. You can vary the size of the grid and the update rate with the links at the top of the page, as well as toggle the color effect.

starpic

When a cell is newly created, its hue is determined by summing a 2D unit vector for each of the three live neighbours’ hues, each vector representing a position on the circumference of a circle where the angle corresponds to the continuous hue value. The hue of the new cell is taken as the angle between the origin and the end of the combined vector (using Math.atan2). This way an ‘average’ is obtained that does not tend towards any particular part of the color spectrum.
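
The demo is JavaScript, but the averaging step is easiest to see as a few lines of code; a sketch, with hues expressed as angles in radians:

// Hues as angles in radians on a colour circle: each neighbour contributes a unit
// vector, and the new cell's hue is the angle of the summed vector.
class HueAverage {
  static double averageHue(double[] neighbourHues) {
    double x = 0;
    double y = 0;
    for (double hue : neighbourHues) {
      x += Math.cos(hue);
      y += Math.sin(hue);
    }
    return Math.atan2(y, x);
  }
}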

As the simulation is so computationally intensive it can act as a benchmark of sorts between browsers. At the time of writing, Safari on OS X was giving the fastest results.

The source is available on GitHub and has a GNU license.

As always, any comments or enquiries are welcome here or at jimblackler@gmail.com.

This tool creates SVG (Scalable Vector Graphics) files to illustrate information structured as a basic tree.

Simple tree image

Here I define tree as an ordered graph without loops where every node has zero or one ‘parent’ nodes in the same tree.

It’s a very common structure in computing and will be familiar to most as the structure of folders on a personal computer (as seen in the ‘Unix’ example). It’s also the structure of classes in a single-inheritance object oriented programming language (as seen in the ‘Java’ example).

However lots of real world data can be formatted this way too. For instance an ‘org chart’ of the hierarchy of an organization (because everyone has a boss, apart from the boss of the company).

Project

I wanted to visualize binary search trees to help understand a problem, but I couldn’t find a simple tool to take tree data as text and output it as a line drawing. I also became interested in the problem of arranging arbitrary trees neatly, in the original breadthwise order, without overlapping and with sensible spacing between elements. So, I decided to make a tree diagram tool myself.

It may be useful as a tool to generate diagrams of tree structures for documents, presentations and so forth, so I’ve put it online.

Example tree showing character classifications in an online game

The tool is written in pure JavaScript and creates SVGs that all modern browsers can render. This means the scripts can also be dropped directly into web applications that create tree data on the fly, as a reporting/visualization tool. The source is freely licensed under GPL and placed on GitHub.

Use

To make your own diagrams all you have to do is visit one of the demo pages and edit the data, which is specified in a simple indented outline style, e.g.:

My Root
  My First Child
    My Grandchild
  My Second Child

This will generate the diagram below:

Custom tree image

You can edit the options (provided as editable JSON in an edit box on the page) to customize the image in various ways. For instance setting "flipXY": 1 will convert the image to a horizontal diagram.

Custom horizontal tree image

I won’t detail all the options here because it’s designed for experimentation; just play with the values and see what you get. You can change the relative size and margins of the nodes, line spacing in the labels, arrow size and direction and more.

To change the colors or line styles of the nodes and arrows, or the label font size and style, simply edit the CSS data on the page. For instance, editing the CSS as follows…

text {
  text-anchor: middle;
  font-size: x-small;
  fill: white;
  }

rect {
  fill: green;
  stroke: black;
  stroke-width: 0;
  }

…would result in this diagram:

Custom style tree

Method

The diagram generator has the task of building diagrams that have elements positioned and spaced sensibly, without nodes or lines overlapping, and making the best use of available space (the nominated rectangle the diagram may occupy). Element positions need to be considered as a whole, as repositioning any element has a knock-on effect on any other elements it might now overlap, on free space that could be better used, and so on.

After experimentation I developed a relatively simple method in which all nodes have their vertical position fixed in a level based on the distance to their tree root. The horizontal positions preserve the breadthwise order of the nodes in the tree, but other than that, nodes are allowed to move horizontally. The method relies on identifying the one level which has least potential to be repositioned, laying this out with regular spacing and fitting the other rows around it.

Relative values are given in the options for the widths of nodes and the spaces between sibling nodes (nodes that share the same parent) and cousin nodes (nodes that are the same distance from their roots but don’t share the same parent).

  • The tree structure is converted into an array of levels to be displayed in rows.

  • Each level is measured for its minimum width given the spacing ratios in the
    options.

  • The row that occupies the most screen width is nominated as the fixed row.

  • All rows between the fixed row up to and including the root are now considered in turn. Nodes that have children (in the level below) are given an ideal horizontal position: the horizontal average of their children’s positions.

  • A ‘sweep’ process then travels left to right across the nodes, forcing the rightmost node of each considered pair further right to ensure that it is not positioned too close to its predecessor. As this may push nodes outside the diagram area, a return sweep performs the same operation from right to left, with the rightmost element constrained to the available horizontal space. (A sketch of the sweeps appears after this list.)

  • These positions won’t result in overlapping elements. However, because of the way the sweep operates, nodes often end up positioned immediately to the right of their neighbours even when there is a large gap remaining that could be occupied. In order to have nodes occupy the central position in the available space, a third and fourth sweep are performed in mirror image of the first two (right to left, then left to right). Naturally, after sweep four the elements are often positioned immediately to the left of their neighbours. So, the positions after sweep two and sweep four are averaged to determine the final position of the nodes: non-overlapping and evenly positioned in the available space.

  • Illustration of the sweep process

  • A similar operation is performed on all the rows below the fixed row traveling downwards. On this occasion, the ideal horizontal position for the nodes is an even distribution of children underneath, with the group centered on their parent node.
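
Here is a rough sketch of the sweep-and-average step for a single row (the tool itself is JavaScript; the names and units here are illustrative). Each entry is a node’s desired horizontal centre, in breadth order:

// Rough sketch of the four sweeps for one row; positions are desired x centres in
// breadth order, minGap is the minimum spacing, width is the available horizontal space.
class RowLayout {

  static double[] layoutRow(double[] positions, double minGap, double width) {
    int n = positions.length;
    if (n == 0) {
      return new double[0];
    }
    // Sweeps 1 and 2: push right so neighbours don't crowd, then pull back from the right edge.
    double[] a = positions.clone();
    for (int i = 1; i < n; i++) {
      a[i] = Math.max(a[i], a[i - 1] + minGap);
    }
    a[n - 1] = Math.min(a[n - 1], width);
    for (int i = n - 2; i >= 0; i--) {
      a[i] = Math.min(a[i], a[i + 1] - minGap);
    }
    // Sweeps 3 and 4: the mirror image, constrained to the left edge.
    double[] b = positions.clone();
    for (int i = n - 2; i >= 0; i--) {
      b[i] = Math.min(b[i], b[i + 1] - minGap);
    }
    b[0] = Math.max(b[0], 0);
    for (int i = 1; i < n; i++) {
      b[i] = Math.max(b[i], b[i - 1] + minGap);
    }
    // Averaging the two non-overlapping layouts centres nodes in the free space.
    double[] result = new double[n];
    for (int i = 0; i < n; i++) {
      result[i] = (a[i] + b[i]) / 2;
    }
    return result;
  }
}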

Contact

I hope some find the tool useful or interesting. Check out the site and as ever, feel free to contact me on the comments or at jimblackler@gmail.com, or make requests via the GitHub page.

Often I’m browsing the web and I’m curious what others have thought about the articles I’m reading. Many pages have comment sections, but these are usually occupied mostly by that site’s regulars rather than a cross-section of internet users in general. For a while I’ve wanted to make an app that lets you view comments about an article that have originated elsewhere.

So here’s my third Chrome app (after the solitaire game, and an earlier web-dev tool, URLGuide, that I didn’t blog about). It’s an extension that lets you see which Google+ users have shared pages you browse to. You can read the comments on the shares and their replies, and there’s nothing stopping you from joining in the conversation, or circling the other users who may share your interests.

As you browse the web, an icon will appear in your Chrome address bar linking you to discussion between Google+ users about that page. If the icon is red it means it’s a ‘hot topic’ with a number of replies from G+ users.

Like all Chrome apps it’s JavaScript and HTML. It’s pretty standard stuff; you can view the source of the extension using developer mode and it’s not encrypted. I’ve taken care over privacy. The app uses a direct, secure connection to the Google+ API; this can’t be eavesdropped, but I also take care to only make queries about public sites.

I hope you enjoy the extension. You can find it here.

Link to the demo

This next project is a bit of fun for the new year. It’s a maze generator written with JavaScript/HTML5 Canvas. I was thinking about possible algorithms to generate classic maze puzzles and thought it might be interesting to write one.

The algorithm is simple at heart. It’s from a class of puzzle generators that work in conjunction with a solver algorithm, changing the puzzle step by step while continuously trying to solve it to see if it’s still viable.

Method

The maze generator starts with a blank grid. One in four squares is blocked with base ‘columns’. It keeps a list in memory of all the squares on the grid that could be filled in – the ‘blockable’ squares. It then solves the grid in its current state using a classic flood-fill algorithm.

In the solve operation, every square that forms part of the shortest path (or joint shortest paths) from top left to bottom right is detected (these are the squares painted red in the demo visualizer). Because in the early phases there are many equally good routes, it is typical for much of the maze to be painted red.

These squares are then searched in random order to find one that is still in the ‘blockable’ list. It’s removed from the list, the grid square is blocked, and the solver is re-run. If the solver reports that the blank squares are no longer all accessible from each other, that square cannot be blocked without breaking the puzzle. We ‘unblock’ it, but we don’t put it back in the list because it can *never* be blocked without breaking the puzzle. If the solver reports the puzzle is still universally accessible, the current grid is rendered to the screen and the process repeats.

When there are no squares remaining from the path that are in the blockable list, the algorithm just attempts to block any random squares in the list (with the same accessibility checks) until nothing remains in that list. Then the puzzle is complete.

The reason that squares along the current best route are considered first is that in doing so the length of the solution path can only be increased. Without this constraint virtually every maze ends up with a solution that is too short and almost always forms a diagonal path roughly direct from top left to bottom right.
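
A simplified sketch of the block-and-test loop follows (in Java rather than the demo’s JavaScript, and omitting the shortest-path prioritisation described above; it just tries blockable squares in random order while protecting the entrance and exit):

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Deque;
import java.util.List;
import java.util.Random;

// Simplified sketch: block random squares, keeping only blocks that leave every
// open square reachable from every other open square.
class MazeSketch {

  static boolean[][] generate(int size, Random random) {
    boolean[][] blocked = new boolean[size][size];
    List<int[]> blockable = new ArrayList<>();
    for (int y = 0; y < size; y++) {
      for (int x = 0; x < size; x++) {
        boolean isStart = x == 0 && y == 0;
        boolean isEnd = x == size - 1 && y == size - 1;
        if (!isStart && !isEnd) {
          blockable.add(new int[] {x, y});
        }
      }
    }
    Collections.shuffle(blockable, random);
    for (int[] cell : blockable) {
      blocked[cell[1]][cell[0]] = true;
      if (!fullyConnected(blocked)) {
        // Blocking this square breaks the maze; unblock it and never try it again.
        blocked[cell[1]][cell[0]] = false;
      }
    }
    return blocked;
  }

  // Flood fill from one open square; the grid passes if every open square was reached.
  private static boolean fullyConnected(boolean[][] blocked) {
    int size = blocked.length;
    int open = 0;
    int start = -1;
    for (int y = 0; y < size; y++) {
      for (int x = 0; x < size; x++) {
        if (!blocked[y][x]) {
          open++;
          if (start < 0) {
            start = y * size + x;
          }
        }
      }
    }
    if (open == 0) {
      return true;
    }
    boolean[][] seen = new boolean[size][size];
    Deque<Integer> stack = new ArrayDeque<>();
    stack.push(start);
    seen[start / size][start % size] = true;
    int reached = 0;
    int[] dx = {1, -1, 0, 0};
    int[] dy = {0, 0, 1, -1};
    while (!stack.isEmpty()) {
      int cell = stack.pop();
      int cx = cell % size;
      int cy = cell / size;
      reached++;
      for (int d = 0; d < 4; d++) {
        int nx = cx + dx[d];
        int ny = cy + dy[d];
        if (nx >= 0 && nx < size && ny >= 0 && ny < size && !blocked[ny][nx] && !seen[ny][nx]) {
          seen[ny][nx] = true;
          stack.push(nx + ny * size);
        }
      }
    }
    return reached == open;
  }
}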

Please take a look at the demo. Different sized puzzles can be selected (you can customize the values by editing the URL parameters). When the puzzle is complete the red solution indicator can be turned on and off. If you’d like to look at the source you can do so in Chrome Web Developer tools or similar tools in other browsers.

Link to the demo

Click here for the Chrome application

Click here to play online

Does the world really need yet another implementation of Solitaire for the browser? Yes, absolutely!

When I acquired a Chromebook recently I looked for Solitaire Chrome apps (my gaming tastes aren’t particularly sophisticated), only to be disappointed by over-presented, ad-laden apps heavy in Flash.

I just wanted an implementation of regular Klondike Solitaire that matched the simplicity of the old Windows-bundled version – the one played for years by bored office workers worldwide before the internet took over as the principal time-wasting activity.

Also I wanted one that would work offline (when the user has no internet connection). Solitaire is an ideal activity on the plane or train.

Another motivation was simply that I felt like writing a card game having never developed one before. Also I wanted to understand how feasible the idea of offline browser applications is.

Here it is: a minimal implementation of classic Klondike Solitaire to be played in the browser, developed with HTML5 technology including JavaScript. It’s also presented as a working Chrome application that can be installed to Chrome with just two clicks.

It was developed using Eclipse and Chrome Developer Tools. Cards were adapted in Photoshop and Inkscape from public domain and Creative Commons resources.

Click here for the Chrome application

Click here to play online

A few years ago there was a lot of talk about web ‘mash ups’; using web APIs to combine web applications.

Photos Calendar is an old-school mash up that brings together Google+, Picasa and Google Calendar, all using a Java web app running on Google App Engine.

To try it out click here. Or read on for the inspiration for the app, and how it was built.

Concept

Now that Android enables instant upload of photos to the cloud, I’ve been sharing lots of pictures taken with my phone onto Google+. Also, as I’ve used Picasa Web Albums for some time, the cloud now has quite a bit of my photo history.

Another web app I’ve been using for years is Google Calendar. It occurred to me that it would be interesting to see links to my cloud-based photos appearing on Google Calendar, in the form of events placed at the time the pictures were taken. In some cases these would appear next to the original appointment for the event.

As well as making the task of finding particular pictures easier, it also may help you to recall what you were doing on a particular day by checking your calendar. But above all it would simply add an interesting new way of browsing your existing data.

Development

I believed the idea was practical. The Picasa service is the storage engine for Google+ picture posts, and it has a public API. Also, Google Calendar has an API, as well as other ways that calendar data can be imported into it.

In fact, developing the app wasn’t particularly hard. The toughest bits were handling the Picasa API’s authentication mechanism via App Engine, fixing ambiguities with the time zone on photo time stamps, and creating the pages to explain to users what to expect (since the app has no user interface of its own).

App Engine

Ideally I wanted the app to run on Google App Engine, because this service provides a scalable and robust way of serving simple apps.

I considered the best way to get the events to appear on Google Calendar. I could have used Calendar’s gData API to import events to users’ existing calendars. However this approach would have modified the main event calendar rather than adding an extra layer, and would have created the need to synchronize data on a schedule.

It’s much easier to serve an iCal calendar on demand and have users add this to their calendars as an extra layer. iCal is the name of the built-in calendar app on Macs, and its format, of the same name, is the main format used by Google Calendar to import calendars.

Fortunately App Engine allows virtually any content to be served by implementing the HttpServlet class. If an App Engine app can serve iCal format calendar data, extra calendar data can simply be imported into Google Calendar by adding a URL. This means that for most users the app can be set up with just a few clicks. Users can also toggle display of the photos calendar on and off as they see fit.

iCal4j

I discovered that the excellent iCal4j library could work on App Engine. Not all Java libraries will work on App Engine ‘out of the box’ because of restrictions imposed by the platform, such as a lack of threading support. iCal4j allows Java apps to create data in the iCal format. From the developer’s perspective, adding events to an iCal4j calendar is pretty simple.

If the calendar service is also a client to the Picasa API, the two services can be bridged. The servlet queries the API, iterates the results, and spits out a calendar with one event per photo.
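
A minimal sketch of such a servlet, using roughly the iCal4j API of the time; the single hard-coded event stands in for the per-photo events built from the Picasa feed:

import java.io.IOException;
import java.util.Date;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import net.fortuna.ical4j.data.CalendarOutputter;
import net.fortuna.ical4j.model.Calendar;
import net.fortuna.ical4j.model.DateTime;
import net.fortuna.ical4j.model.ValidationException;
import net.fortuna.ical4j.model.component.VEvent;
import net.fortuna.ical4j.model.property.ProdId;
import net.fortuna.ical4j.model.property.Uid;
import net.fortuna.ical4j.model.property.Version;

// Sketch only; in the real app each event is built from one photo in the Picasa feed.
public class PhotosCalendarServlet extends HttpServlet {

  @Override
  protected void doGet(HttpServletRequest request, HttpServletResponse response)
      throws ServletException, IOException {
    Calendar calendar = new Calendar();
    calendar.getProperties().add(new ProdId("-//Example//Photos Calendar//EN"));
    calendar.getProperties().add(Version.VERSION_2_0);

    VEvent event = new VEvent(new DateTime(new Date()), "Example photo");
    event.getProperties().add(new Uid("photo-1@example.invalid"));
    calendar.getComponents().add(event);

    response.setContentType("text/calendar");
    try {
      new CalendarOutputter().output(calendar, response.getWriter());
    } catch (ValidationException e) {
      throw new ServletException(e);
    }
  }
}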

Authentication

So how to enable the servlet to make API queries? The problem is, to serve a user’s private data Picasa requires an authentication token to be sent with every query. This token is obtained as part of an authentication process where the user is sent to the Google website in order to agree to give the application access to his or her data.

If the iCal service is to make Picasa queries then it will need this token. But as the service is to serve many users, how is it to know which token to use to serve requests? The answer is to have the calendar URL carry the token, meaning that the service need store no information about each user. The information is carried in the URL, encoded in a ‘data’ parameter. For each user, the URL is prepared by the servlet that receives the data following the authentication request, embedded with all the information required to create that user’s calendar layer.

However, for security reasons it is necessary to encrypt the authentication token. Otherwise, anyone who could access the calendar URL could also recover the Picasa authentication token, which would allow them to make any Picasa API calls with the full permissions granted by the user. If it is encrypted with a key private to the app, this attack vector is heavily reduced. There is no way to avoid the fact that anyone who discovers the calendar URL gains access to the photos calendar layer, but these URLs should not be revealed to outsiders in normal use of the app.

Time Stamps

It’s crucial for this app that the picture events are created with the correct dates and times, since that’s all the app does! It’s easy to get the timestamp the pictures were created: Picasa serves this information from the camera EXIF (picture metadata) accompanying the picture. The problem is that these time stamps are in the local time of the camera, but no time zone is given. This is a big problem when combined with the fact that iCal calendars imported into Google Calendar have to be given an explicit time zone. If I don’t get the combination right, pictures could appear up to 24 hours removed from the actual time the picture was taken.

Using location data from the EXIF might offer one solution, but location data is usually removed from the pictures at upload time for security reasons. I have no option but to assume that all pictures are taken in the time zone of the user.

It’s actually quite difficult to get the time zone of the current user, even though the app has their Picasa credentials. The information does not appear to be available through the general Google Account, but there is a roundabout way of getting it through the Calendar API (by looking at the time zone on the default calendar). However, I didn’t want the time overhead of using this API, to ask for another permission that users might be dubious about, or more tokens that would need to be carried on the URL.

My fallback was to use a technique I’d employed on my Events Clock project. It is possible to use JavaScript to make an educated guess as to the user’s current time zone based on the offset applied to the Date object.

I used an excellent snippet from the Redfin Developer’s Blog to detect the time zone. This, being JavaScript, happens on the client side, but the data is stored in a cookie for later retrieval on the server side following successful authentication with the Picasa API. It’s an approximation, and it relies on the assumption that the user’s browser time zone is the same as their camera time zone, but it got good results in my tests.

You can see the app for yourself here: https://photoscalendar.appspot.com

The app is free software under a GPL license. The source can be found here https://github.com/jimblackler/PhotosCalendar

Here’s the results of an experiment into building dynamic SVG for direct viewing in web browsers. If you want to see the animations just click here or here.

The hands animate and the time on the clock should be correct for your time zone!

SVG stands for Scalable Vector Graphics, and is an advanced image format that has existed for over ten years. However, it is now really coming into its own as almost all modern desktop browsers have the ability to display SVG directly in pages. What makes the technology really interesting is that SVGs can contain JavaScript that manipulates the images in a very similar way to dynamic HTML. This means a variety of interesting visual applications are possible, all available from a single-click link; and not an ounce of Flash in sight.

Building on the fortune cookie generator I built last month, I wanted to build an interesting new SVG demo to showcase what you might call ‘live Photoshopping’: generation of custom photo-realistic images. It seems traditional to make a clock when learning SVG, so I played on this idea a little with a plan for a photorealistic, animated image of Big Ben. The hands were to move in real time and show the correct time for the user’s browser.

Big Ben (more pedantically, the Clock Tower at the Palace of Westminster) is one of the most famous clocks in the world. It’s an iconic symbol of my home city, London; so what better clock to choose for the animation.

As often with these projects, the job began with a search on Flickr for high quality images with licenses permitting remixing. I found two great images: flickr.com/deerhake11/3903926815 and flickr.com/aldaron/536362686. Many thanks, guys, for the great images and for putting Creative Commons licenses on them.

The next step was to fire up Photoshop and begin the process of removing the clock hands and shadows. Before and after pictures are here. Then the same tool had to be employed to recreate the hands as assets that could be overlaid and rotated to show the correct time.

With that done, I used Inkscape to help me line up all the images and try out various filter effects to recreate a subtle shadow on the clock hands. Once I was happy with the results I built up my own SVG file and added the scripts to animate the clock hands.

One problem that needed solving was that the animation shouldn’t be shown while its child images were still being downloaded into the browser. Otherwise the user would see the image being built up from its constituent elements and the illusion would be spoiled from the beginning. Sadly I couldn’t find an SVG equivalent of window.onload to delay code execution until the whole image was ready. My workaround was to include the child images as base64-encoded inline data, which ensured as a side effect that all assets would be available at the same time. There may be better workarounds, had I the inclination to investigate.

Because the drop shadow effect uses SVG filters, the image won’t look as realistic on browsers that don’t have full SVG support. These include Safari and Internet Explorer 9. Earlier versions of Internet Explorer don’t support SVG at all; nor, weirdly, does the Android web browser.

Here are both links again.

Distant view

Close up view
