Static Maps API format size comparison

Sod’s Law: I realised after the deadline that the loading of the environment’s texture could have been sped up! So I’m fixing it now.

The Static Maps API request still has to go through a PHP proxy due to cross-origin restrictions. However, instead of also using PHP’s GD library to convert the image from a PNG to a JPG, which is notoriously slow, we can request the specific format straight from the API using the format parameter.
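As a sketch, the client just needs to put format=jpg into the query string it sends to the proxy. The proxy filename and everything except the Static Maps parameters themselves are assumptions for illustration:

```javascript
// Hypothetical sketch: build the proxied Static Maps URL, asking the API
// for a JPG up front instead of converting a PNG with GD afterwards.
// "staticmap-proxy.php" is an assumed proxy path, not the project's actual file.
function staticMapUrl(lat, lng, zoom, size) {
  var params = [
    'center=' + lat + ',' + lng,
    'zoom=' + zoom,
    'size=' + size,
    'maptype=satellite',
    'format=jpg' // the key change: request the smaller format directly
  ];
  return 'staticmap-proxy.php?' + params.join('&');
}
```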

I have tested all the formats to check which is the smallest:


As expected, the progressive jpg provides the smallest file size.

Changing the format to jpg cut around 1 second off the request time.




It might not seem like much, but considering this request actually freezes the page, it is important that it is as quick as possible.

Grid systems in web design

A grid system was used whilst designing the layout

The decision was made to use a grid system whilst designing the layout of the website. A 20×20 pixel grid with 12 columns was chosen. Using a grid aided in the neat placement of objects in the design.

A grid layout system was deemed necessary since conveying the purpose of the website and explaining its idea to people was the most important design challenge. A grid layout makes the page designs more structured. This structure enhances the user experience by showing that thought has been given to how the website presents information to the user, which in turn gives the user confidence in the website.

Another advantage of designing with a grid is that it transfers to the development stage of the website. When guidelines and sizes have been set for UI elements in the design stage, laying out the pages in code becomes much quicker. It is also easier to produce seamless visuals, from Photoshop mock-ups to the coded webpage, simply by following the dimensions of the grid.

Further reading and great resources:

Preloading large content in the browser

Anyone who has experimented with ThreeJS knows that even the minified script is massive, 408KB to be precise (r66).

For a responsive site such as Cell, where we expect mobile users, and therefore slower and patchier connections, it is even more important to keep the user informed of loading progress.

By using the onprogress event of the JavaScript XMLHttpRequest object, we can provide continuous feedback as the file is downloaded.

The only problem is that while e.loaded works fine, the total size is only reliably reported for uploads; our proxied download doesn’t provide one. However, we can use a little PHP trickery to pass this value back to the JavaScript request.

First of all, the JavaScript makes the XHR request to a PHP script, which reads the file into a string using file_get_contents().

Secondly, we use PHP’s strlen() function to get the size in bytes of the file to be downloaded. The following line then returns the size as a comment at the start of the output, with the file itself on a new line (so the comment doesn’t swallow the first line of the script):
echo "// $s\n$f";

Next, in the JavaScript, we can use a simple regex to match the run of digits in that leading comment, giving us the total file size.

From then onwards we can use these values to calculate the percentage loaded and pass that into a <progress> element to provide the user feedback that we needed.
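The client side of this can be sketched as follows; the function names are ours, not from the project source, and the "// <size>" comment format matches the PHP line above:

```javascript
// Pull the total file size out of the leading comment, e.g. "// 417792"
function parseTotalSize(responseText) {
  var match = responseText.match(/^\/\/ (\d+)/);
  return match ? parseInt(match[1], 10) : 0;
}

// Convert loaded/total bytes into a 0–100 value for a <progress> element
function percentLoaded(loaded, total) {
  if (!total) return 0;
  return Math.min(100, Math.round((loaded / total) * 100));
}

// Wiring it up to an XHR request and a <progress> element:
function preload(url, progressEl, done) {
  var xhr = new XMLHttpRequest();
  var total = 0;
  xhr.open('GET', url);
  xhr.onprogress = function (e) {
    // The size comment arrives in the first bytes, so parse it once
    if (!total) total = parseTotalSize(xhr.responseText);
    progressEl.value = percentLoaded(e.loaded, total);
  };
  xhr.onload = function () { done(xhr.responseText); };
  xhr.send();
}
```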


There was, however, a major problem with this preloader. Because the file is loaded through the PHP script, it isn’t cached like a normal script would be. This meant that if a user visited more than one environment page they would have to download the file again, completely defeating the purpose of the preloader!

In hindsight, some kind of PHP cache may have helped with this issue; instead, I opted to remove the preloader script completely and let the browser handle loading and caching natively.

Generating more accurate 3D models

One of the things that regularly surprised us was the quality of the models that we were able to produce.

There were two factors which contributed to this: the quality of the texture and the detail in the height map.

We had already found the highest quality texture that the Google Static Maps API would give us, so the next step was the height maps.

The main issue with the detail of the height maps was finding the perfect balance between clone time and detail.

First of all, I moved the blurring of the height maps from the PHP to the JS thanks to stackblur.js. The Gaussian blur filter in PHP is extremely slow and lacks a radius option, which forced me to apply the filter 50 times over in a for-loop, greatly increasing the clone time. On our sandbox, this change cut the clone time from several tens of seconds down to under 3 seconds!

Now that the clone time was reduced, I attempted to increase the detail in the height maps.

The main issue here was the Google Elevation API. It would reject requests if they were either too large (i.e. requesting too many elevation points in one request) or too frequent. Each rejected request had to be resent, adding more time to the cloning process.

The first thing I did was to refactor the request code to continuously request the data until it is complete in a self-invoking loop, rather than a set of fixed-size chained requests. This made it much easier to fiddle with the height map detail variables.
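The loop can be sketched like this; the names are hypothetical, and requestChunk stands in for whatever wraps the actual Elevation API call:

```javascript
// Self-invoking request loop: keep asking for the next chunk of elevation
// points until the whole grid is covered, resending a chunk if the API
// rejects it (too large or too frequent).
function fetchElevations(requestChunk, totalPoints, chunkSize, onComplete) {
  var results = [];

  (function requestNext(offset) {
    if (offset >= totalPoints) {
      onComplete(results);
      return;
    }
    var count = Math.min(chunkSize, totalPoints - offset);
    requestChunk(offset, count, function (err, points) {
      if (err) {
        // Rejected: resend the same chunk (a real version would back off first)
        requestNext(offset);
        return;
      }
      results = results.concat(points);
      requestNext(offset + count);
    });
  })(0);
}
```

Because the loop only depends on totalPoints and chunkSize, fiddling with the height map detail variables no longer means restructuring a fixed chain of requests.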

With the new request code in place, I was able to crank up the detail variables to much higher values. The original resolution was 20x20px; increasing this to 100x100px produced amazing results: all the crevices and valleys showed up on the model in amazing detail. The only problem was that the clone time shot up to over a minute. We felt that a short clone time greatly outweighed the quality of the models; an average user would not notice the difference unless they had a comparison.

Agreeing that the clone time had to stay under 30 seconds, we settled on a resolution of 40x40px; not a great increase, but an increase nonetheless!

Webcam gestures

As a fun extra, we also added the ability to take control of and rotate the environment using only gestures in front of the webcam. Waving your hand (or any part of your body) in front of the webcam will allow you to rotate the environment left, right, up and down. Although this is only a very primitive use of the webcam, it is an example of how webcam gestures can be used on the web. It opens up many possibilities of non-peripheral interaction with the browser, especially in the accessibility and games industries.

Combining willy-vvu’s gesture.js with ThreeJS’ OrbitControls, we were able to implement this feature easily. gesture.js handles loading the user’s webcam stream into a canvas, detecting skin colour, and determining the direction of the gesture.
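The glue between the two libraries boils down to mapping a detected gesture direction onto a rotation. The sketch below is ours: the direction strings, the step size, and the rotateLeft/rotateUp calls on OrbitControls should all be treated as assumptions rather than the exact APIs:

```javascript
// Hypothetical mapping from a gesture.js direction to OrbitControls rotation
// deltas. theta = horizontal (azimuthal) angle, phi = vertical (polar) angle.
var ROTATE_STEP = 0.1; // radians per detected gesture (tuning value)

function gestureToRotation(direction) {
  switch (direction) {
    case 'left':  return { theta: -ROTATE_STEP, phi: 0 };
    case 'right': return { theta:  ROTATE_STEP, phi: 0 };
    case 'up':    return { theta: 0, phi: -ROTATE_STEP };
    case 'down':  return { theta: 0, phi:  ROTATE_STEP };
    default:      return { theta: 0, phi: 0 }; // no recognised gesture
  }
}

// In the gesture callback, something like:
//   var r = gestureToRotation(direction);
//   controls.rotateLeft(r.theta);
//   controls.rotateUp(r.phi);
```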


As users clone more and more environments, their profile pages and the Recently Cloned Environments page will become increasingly large. This will negatively affect page load times and page weight, especially because each link comes with an image of the environment.

Pagination to the rescue!

This is a very easy solution to implement.

The first step is to limit the number of environments the SQL returns in a single query. We set this to 12, because it divides nicely across breakpoints where we show 1, 2, 3 and 4 environments in a row.

The second step is to allow the user to select which page they want to view. This was done with a simple ?page=1 URL parameter. We also added PREVIOUS and NEXT buttons which simply decrement and increment the page parameter respectively.

The LIMIT clause in SQL accepts two parameters: an offset and a row count. The offset is how many rows to skip before returning, and the row count is how many rows to return. Multiplying (page − 1) by 12 gives us the offset, which we can then pass into the query to return the 12 environments for that page.

These are the very basics of setting up pagination. There are a few more things to do to make it more user-friendly and to fix some bugs, such as disabling the PREVIOUS and NEXT buttons when there are no more environments to show, providing an error message if the user enters something other than a number or a page that is out of bounds, and ensuring the first page is shown if no parameter is present.
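The core of it can be sketched with two small helpers (names are ours, and the validation rules are the ones described above, not the project's exact code):

```javascript
// Sketch of the pagination logic. parsePage() validates the ?page=
// parameter and clamps it into range; pageToOffset() turns a 1-indexed
// page into the row offset for the SQL LIMIT clause.
var PER_PAGE = 12;

function parsePage(param, totalRows) {
  var page = parseInt(param, 10);
  if (isNaN(page) || page < 1) return 1; // non-numeric or missing: first page
  var lastPage = Math.max(1, Math.ceil(totalRows / PER_PAGE));
  return Math.min(page, lastPage);       // clamp out-of-bounds pages
}

function pageToOffset(page) {
  return (page - 1) * PER_PAGE;          // page 1 -> offset 0
}

// The query then becomes something like:
//   SELECT ... FROM environments ORDER BY cloned DESC LIMIT <offset>, 12
```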


WolframAlpha is a “computational knowledge engine”. Unlike a search engine, Wolfram generates output by doing computations from its own internal knowledge base[1]. The user can enter any piece of data, for example a place, a formula or a person’s name, and Wolfram will return any data it thinks is related.

We are using Wolfram to return additional information about an environment. We pass the coordinates of the environment to Wolfram and we display the information it gives back. We have grouped the data we want to display into 3 categories: time, weather and nearby.


The main problem with using Wolfram was that due to the output being computational, it didn’t always return exactly the same data, even with the same input. So refreshing the page may cause the data to change. This isn’t the greatest user experience.

We opted to add a small message in the bottom-right corner to warn the user that the information is dependent on data availability.

In hindsight, we could have used some kind of caching system to save the data once it has been loaded by the first user and only update it if it is newer or more populated. However I’m not sure whether Wolfram’s policy would allow us to do that.

Another problem was that the data Wolfram returns is raw text with no formatting. For example, a list of nearby cities was presented without an obvious delimiter between each place, so we had to use some clever regex to split up the lists.
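One way to do this is to treat the distance parenthetical that follows each place as the delimiter. The sample string below is hypothetical (Wolfram's exact plaintext format varies), so treat this as a sketch of the idea rather than the project's actual regex:

```javascript
// Split an undelimited list of "Name (distance)" entries by matching each
// name up to and including its parenthesised distance.
function splitNearby(text) {
  var entries = text.match(/[^()]+?\([^)]*\)/g) || [];
  return entries.map(function (s) { return s.trim(); });
}

// e.g. splitNearby('Swansea (6 mi) Cardiff (35 mi) Bristol (60 mi)')
// yields one entry per place instead of one unreadable string.
```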


Logo design

The design process for the logo started once the general design style for the project had been decided. The aim was to design a logo that looked attractive whilst also being symbolic of Cell Industries. It was decided that the logo must be simple enough to match the flat UI design style chosen for the website and be transferable from web to print design.

The process started with brainstorming words related to Cell Industries as a team and then deciding which were the most relevant and important. Words such as ‘cell’, ‘cloning’, ‘environments’, ‘cube’, ‘sci-fi’, ‘futuristic’, ‘future’, ‘preservation’ and ‘forever’ were discussed until it was decided that ‘cell’, ‘cloning’ and ‘preservation’ represented the company best. With these words in mind, the sketching process started with a shape inspired by the symbol for infinity and the shape of two cells splitting, or ‘cloning’. The infinity symbol was chosen as it suggests something that never ends: when an environment is cloned, it is preserved forever.


Example of logo sketches

Following the sketching stage, mock-ups of the logo were made in Illustrator. Illustrator was used so that the logo would be a vector graphic, scalable to any size and easy to export as an SVG. The final iteration of the logo became a single shape that is symbolic of the company.

Paired with the Cell Industries logo, a logo for Project Titan was also needed. Using the same method of making the logo symbolic of what it represents, the Project Titan logo was designed to depict the Clone Cube with the ‘clone’ inside it, whilst also adhering to the chosen flat style.

logo.png          project-titan.png

The block symbol style of the logos allows for them to be easily transferable from web to print design and also means they can be made into any colour.