All posts by James Alexander Lee

Static Maps API format size comparison

Sod’s Law: I realised after the deadline that the loading of the environment’s texture could have been sped up! So I’m fixing it now.

The Static Maps API request still has to go through a PHP proxy due to cross-origin issues. However, instead of also using PHP’s GD library to convert the image from a PNG to a JPG, which is notoriously slow, we can request the specific format straight from the API using the format parameter.
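As a rough sketch (the proxy filename and its parameters below are assumptions rather than the actual Cell code), the texture request now simply asks the proxy for a JPEG, and the proxy forwards the format parameter straight to the Static Maps API:

var lat = 52.4862, lng = -1.8904; // example coordinates
var img = new Image();
img.onload = function() {
    // use the loaded image as the environment's texture
};
// format=jpg is passed through by the proxy to the Static Maps API, so no GD conversion is needed
img.src = 'staticmaps-proxy.php?lat=' + lat + '&lng=' + lng + '&zoom=18&size=640x640&format=jpg';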

I have tested all the formats to check which is the smallest:

[Image: Static Maps format size comparison]

As expected, the progressive jpg provides the smallest file size.

Changing the format to jpg cut around 1 second off the request time.

Before: [screenshot of the request timing before the change]

After: [screenshot of the request timing after the change]

It might not seem like much, but considering this request actually freezes the page, it is important that it is as quick as possible.

Preloading large content in the browser

Anyone who has experimented with ThreeJS knows that even the minified script is massive: 408KB to be precise (r66).

For a responsive site such as Cell, where we expect mobile users to be visiting on slower and patchier connections, it is even more important to keep the user informed of loading progress.

By using the onprogress event of the Javascript XHR request, we can provide continuous feedback as the file is downloaded.

The only problem is that while the e.loaded property works fine, e.total only works for uploads. However, we can use a little PHP trickery to pass this value back to the Javascript request.

First of all, we load the script with an XHR request that goes through a PHP script, which reads the file into a string using file_get_contents().

Secondly, we use PHP’s strlen() function to get a rough size in bytes of the file to be downloaded. The following line then returns the size as a comment at the start of the file, where $s is the size and $f is the file contents:
echo "// $s $f";

Next, in the Javascript, since the size now sits at the very start of the response, we can use a simple regex match to grab those first six digits and use them as the total file size.

From then onwards, we can use these values to calculate the percentage loaded and pass it into a <progress> element to give the user the feedback we needed.
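Putting the pieces together, a minimal sketch of the preloader looks something like this (the proxy filename and the <progress> element id are assumptions):

var xhr = new XMLHttpRequest();
var progressBar = document.getElementById('clone-progress'); // assumed <progress> element
progressBar.max = 100;
var total = 0;

xhr.open('GET', 'load-script.php?file=three.min.js', true); // assumed PHP proxy

xhr.onprogress = function(e) {
    if (!total) {
        // The PHP prints the byte count as a comment at the very start of the file,
        // so grab the first run of digits from whatever has arrived so far.
        var match = xhr.responseText.match(/\d+/);
        if (match) { total = parseInt(match[0], 10); }
    }
    if (total) {
        progressBar.value = Math.round((e.loaded / total) * 100);
    }
};

xhr.onload = function() {
    // Once fully downloaded, run the script by appending it to the document.
    var script = document.createElement('script');
    script.text = xhr.responseText;
    document.head.appendChild(script);
};

xhr.send();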

However

There was, however, a major problem with this preloader. Because the file was being loaded through the PHP script, it wasn’t cached like a normal script would be. This meant that if a user visited more than one environment page, they would have to download the file again, completely defeating the purpose of the preloader!

In hindsight, some kind of PHP caching may have helped with this issue; however, I opted to remove the preloader script completely and let the browser handle loading and caching natively.

Generating more accurate 3D models

One of the things that regularly surprised us was the quality of the models that we were able to produce.

There were two factors which contributed to this: the quality of the texture and the detail in the height map.

We had already found the highest quality texture that the Google Static Maps API would give us, so the next step was the height maps.

The main issue with the detail of the height maps was finding the perfect balance between clone time and detail.

First of all, I moved the blurring of the height maps from the PHP to the JS thanks to stackblur.js. The Gaussian blur filter in PHP is extremely slow and lacking in options, which forced me to apply the filter 50 times over in a for-loop, greatly increasing the clone time. On our sandbox, this cut the clone time from several tens of seconds down to under 3 seconds!
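A minimal sketch of the in-browser blur, assuming the classic stackblur.js globals and a hypothetical canvas id:

// heightMapImage is an already-loaded Image of the raw height map
var canvas = document.getElementById('heightmap-canvas'); // assumed canvas id
var ctx = canvas.getContext('2d');
ctx.drawImage(heightMapImage, 0, 0, canvas.width, canvas.height);
// stackBlurCanvasRGB(id, x, y, width, height, radius) comes from stackblur.js; the radius here is arbitrary
stackBlurCanvasRGB('heightmap-canvas', 0, 0, canvas.width, canvas.height, 12);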

Now that the clone time was reduced, I attempted to increase the detail in the height maps.

The main issue here was the Google Elevation API. It would reject requests if they were either too large (i.e. requesting too many elevation points in one request) or too frequent. Every rejected request had to be resent, adding more time to the cloning process.

The first thing I did was refactor the request code to keep requesting the data in a self-invoking loop until it is complete, rather than using a set of fixed-size chained requests. This made it much easier to fiddle with the height map detail variables.
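A sketch of that idea, assuming the Google Maps Javascript ElevationService (the batch size, back-off delay and variable names are arbitrary):

var elevator = new google.maps.ElevationService();

// allGridPoints is the full grid of google.maps.LatLng points for the height map
function requestElevations(points, results, done) {
    if (points.length === 0) { return done(results); }

    var batch = points.slice(0, 100); // stay under the per-request size limit

    elevator.getElevationForLocations({ locations: batch }, function(data, status) {
        if (status === google.maps.ElevationStatus.OK) {
            results = results.concat(data);
            points = points.slice(batch.length);
        }
        // A rejected batch (too large or too frequent) is simply retried;
        // either way, wait a little before the next request.
        setTimeout(function() { requestElevations(points, results, done); }, 250);
    });
}

requestElevations(allGridPoints, [], function(elevations) {
    // build the height map from the returned elevations
});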

With the new request code in place, I was able to crank up the detail variables to much higher values. The original resolution was 20x20px; increasing this to 100x100px produced amazing results, with all the crevices and valleys showing up on the model in fine detail. The only problem was that the clone time shot up to over a minute. We felt that a short clone time greatly outweighed the extra quality in the models; an average user would not notice the difference unless they had a comparison.

Agreeing that anything under 30 seconds of clone time was acceptable, we settled on a resolution of 40x40px; not a great increase, but an increase nonetheless!

Webcam gestures

As a fun extra, we also added in the ability to take control of and rotate the environment using only gestures in front of the webcam. Waving your hand (or any part of your body) in front of the webcam will allow you to rotate the environment left, right, up and down. Although this is only a very primitive use of the webcam, it is an example of how webcam gestures can be used on the web. It opens up many possibilities for non-peripheral interaction with the browser, especially in the accessibility and games industries.

Combining willy-vvu’s gesture.js with ThreeJS’ OrbitControls, we were able to implement this feature easily. gesture.js handles loading the user’s webcam stream into a canvas, detecting skin colour and working out the direction of the gesture.
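The wiring is roughly as follows; the callback name and direction strings are assumptions to illustrate the idea rather than gesture.js’ actual API, and it assumes the rotateLeft()/rotateUp() helpers exposed by the r66-era OrbitControls:

var controls = new THREE.OrbitControls(camera, renderer.domElement);
var step = Math.PI / 90; // rotate a couple of degrees per detected gesture

// Hypothetical handler: gesture.js reports which way the hand moved and we
// translate that into a rotation of the environment.
function onGesture(direction) {
    if (direction === 'left')  { controls.rotateLeft(-step); }
    if (direction === 'right') { controls.rotateLeft(step); }
    if (direction === 'up')    { controls.rotateUp(-step); }
    if (direction === 'down')  { controls.rotateUp(step); }
}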

Pagination

As users clone more and more environments, their profile pages and the Recently Cloned Environments page will become increasingly large. This will negatively affect page load times and page weight, especially because each link comes with an image of the environment.

Pagination to the rescue!

This is a very easy solution to implement.

The first step is to limit the number of environments the SQL returns in a single query. We set this to 12, because it divides nicely across our breakpoints, which show 1, 2, 3 and 4 environments per row.

The second step is to allow the user to select which page they want to view. This was done with a simple ?page=1 URL parameter. We also added PREVIOUS and NEXT buttons which simply decrement and increment the page parameter respectively.

The LIMIT keyword in SQL accepts two parameters: an offset and a limit. The offset is how many rows to skip before returning results, and the limit is how many rows to return. Multiplying the page number (minus one, so that page 1 starts at row 0) by 12 gives us the offset, which we can then pass into the query to return the 12 environments for that page.

These are the very basics of setting up pagination. There are a few more things to do to make it more user friendly and to fix some edge cases, such as disabling the PREVIOUS and NEXT buttons when there are no more environments to show, providing an error message if the user enters something other than a number or a page that is out of bounds, and ensuring the first page is shown if no parameter is present.

Wolfram|Alpha

Wolfram|Alpha is a “computational knowledge engine”. Unlike a search engine, Wolfram generates output by doing computations from its own internal knowledge base[1]. The user can enter any piece of data, for example a place, a formula or a person’s name, and Wolfram will return any data it thinks is related.

We are using Wolfram to return additional information about an environment. We pass the coordinates of the environment to Wolfram and we display the information it gives back. We have grouped the data we want to display into 3 categories: time, weather and nearby.

[Image: Wolfram|Alpha data displayed on an environment page]
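For reference, the coordinates are simply used as the input of a Wolfram|Alpha v2 query. A rough sketch of what such a request looks like (the appid is a placeholder, and exactly where the request is made from is omitted here):

var lat = 52.4862, lng = -1.8904; // example coordinates
var query = 'http://api.wolframalpha.com/v2/query' +
    '?appid=YOUR_APPID' +                             // placeholder application id
    '&input=' + encodeURIComponent(lat + ',' + lng) +
    '&format=plaintext';                              // ask for plain-text pods
// The response is then parsed for the pods we care about: time, weather and nearby.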

The main problem with using Wolfram was that, because the output is computed, it didn’t always return exactly the same data, even for the same input. So refreshing the page could cause the data to change, which isn’t the greatest user experience.

We opted to add a small message in the bottom-right corner to warn the user that the information is dependent on data availability.

In hindsight, we could have used some kind of caching system to save the data once it had been loaded by the first user and only update it if the new data was newer or more populated. However, I’m not sure whether Wolfram’s policy would allow us to do that.

Another problem was that the data Wolfram returns is raw text with no formatting. For example, a list of nearby cities was presented without an obvious delimiter between each place, so we had to use some clever regex to split up the lists.


[1] http://www.wolframalpha.com/faqs.html

Feature detection

Because the technologies we are using in this project, primarily WebGL, WebRTC and CSS filters, are so new, browser support is a substantial issue we have to deal with.

Feature detection is a way to detect whether the current browser supports particular features and to hide, display or load different features depending on the results.

Modernizr is an open source Javascript library which does just that. However, rather than using the whole library or going to the trouble of creating a custom Modernizr build with just the features I want, I opted to take just the three tests I needed and run them manually.

The Javascript I am using looks like this:

var Detector = {

    // WebGL: check for a (possibly prefixed) WebGL context on a test canvas
    webgl: (function() {
        try {
            var canvas = document.createElement('canvas');
            return !!window.WebGLRenderingContext && (canvas.getContext('webgl') || canvas.getContext('experimental-webgl'));
        } catch(e) { return false; }
    })(),

    // WebRTC: check for any (vendor-prefixed) getUserMedia implementation
    webrtc: (function() {
        try {
            return navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;
        } catch(e) { return false; }
    })(),

    // CSS filters: apply a blur via cssText and check it stuck (and that we are not in IE9 or below)
    cssfilter: (function() {
        try {
            var el = document.createElement('div');
            el.style.cssText = '-webkit-filter:blur(2px);filter:blur(2px);';
            return !!el.style.length && (document.documentMode === undefined || document.documentMode > 9);
        } catch(e) { return false; }
    })()

};


if (Detector.webgl) { document.documentElement.className = document.documentElement.className.replace('no-webgl', 'webgl'); }
if (Detector.webrtc) { document.documentElement.className = document.documentElement.className.replace('no-webrtc', 'webrtc'); }
if (Detector.cssfilter) { document.documentElement.className = document.documentElement.className.replace('no-cssfilter', 'cssfilter'); }

Classes on the html element change from no-webgl to webgl, for example, if the feature is supported. This allows elements to be hidden or shown in the CSS.

Features can also be checked within the Javascript by using a simple if-statement: if(Detector.webgl) { // do some WebGL stuff }

WebGL

If WebGL isn’t supported, the user will not be able to complete the Clone A New Environment process, so we block them from the beginning:

[Image: unsupported browser warning]

The user can still visit the environment pages; however, they will be greeted with a simple captured image of the environment rather than the full 3D rendered version.

This is a good example of progressive enhancement.

WebRTC

If the user’s browser doesn’t support WebRTC, they will not be able to enjoy the webcam gesture feature. The user is warned when they click the webcam button that their browser is not supported.

We discussed whether we should hide the webcam button altogether if the browser isn’t supported, but decided to leave it visible to ensure that all users, regardless of their current browser, know that the feature is available, and may opt to change their browser in order to try out the feature.

CSS Filters

The CSS blur filter is used to add a highly blurred background to the environment page. Without the filter, the image would not be blurred and would look terrible.

If CSS filters aren’t supported, we hide the background completely.

Generating a preview of a newly cloned environment

Every time a user clones a new environment, a link to that environment will appear on their profile.

In order to make the links more visually appealing, we wanted to add an image along with the link. We could have used the Static Maps API satellite images; however, that wouldn’t be an accurate representation of what the user would expect to see once they had clicked through. A screenshot of the actual rendered 3D environment is what we needed to display, and luckily canvas makes it relatively easy to generate these captures.

I slotted the ‘capture’ process in between generating the height maps and redirecting to the environment page.

Hidden underneath the same fullscreen preloader as on the Clone A New Environment page, another ThreeJS environment is loaded into the page with a slightly different camera and controls setup to ensure the environment is in the correct position for the capture.

We can use the toDataURL() method built into the WebGLRenderer’s canvas to encode the displayed content into a base64 PNG string:

renderer.domElement.toDataURL('image/png');

This string is then passed to a PHP script, which converts it and saves it as an image on the server.

The PHP also converts the PNG to a JPEG and compresses it. The user’s profile page loads in 12 of these images on each page (see post on Pagination) so it is important to keep the file sizes low.
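A minimal sketch of that hand-off (the endpoint and field names are assumptions):

// Note: capturing a WebGL canvas requires either preserveDrawingBuffer: true on the
// renderer, or calling toDataURL() straight after a render.
var dataUrl = renderer.domElement.toDataURL('image/png');

var xhr = new XMLHttpRequest();
xhr.open('POST', 'save-capture.php', true); // assumed endpoint name
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
// On the server, the PHP decodes the base64 string, converts it to JPEG and writes it to disk.
// environmentId is the id of the newly cloned environment.
xhr.send('environment=' + encodeURIComponent(environmentId) + '&image=' + encodeURIComponent(dataUrl));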

[Image: Environment Capture]