
Feature detection

The technologies we are using in this project, primarily WebGL, WebRTC and CSS filters, are so new that browser support is a substantial issue we have to deal with.

Feature detection is a way to detect whether the current browser supports particular features and to hide, display or load different features depending on the results.

Modernizr is an open-source JavaScript library that does just that. However, rather than loading the whole library or going to the trouble of creating a custom Modernizr build with only the features I want, I opted to take just the three tests I needed and run them manually.

The JavaScript I am using looks like this:

var Detector = {
    webgl: (function() {
        try {
            var canvas = document.createElement('canvas');
            return !!window.WebGLRenderingContext &&
                   !!(canvas.getContext('webgl') || canvas.getContext('experimental-webgl'));
        } catch (e) {
            return false;
        }
    })(),
    webrtc: (function() {
        try {
            return !!(navigator.getUserMedia || navigator.webkitGetUserMedia ||
                      navigator.mozGetUserMedia || navigator.msGetUserMedia);
        } catch (e) {
            return false;
        }
    })(),
    cssfilter: (function() {
        try {
            var el = document.createElement('div');
            el.style.cssText = '-webkit-filter:blur(2px);filter:blur(2px);';
            return !!el.style.length &&
                   (document.documentMode === undefined || document.documentMode > 9);
        } catch (e) {
            return false;
        }
    })()
};

if (Detector.webgl) { document.documentElement.className = document.documentElement.className.replace('no-webgl', 'webgl'); }
if (Detector.webrtc) { document.documentElement.className = document.documentElement.className.replace('no-webrtc', 'webrtc'); }
if (Detector.cssfilter) { document.documentElement.className = document.documentElement.className.replace('no-cssfilter', 'cssfilter'); }

If a feature is supported, the corresponding class on the html element changes, for example, from no-webgl to webgl. This allows elements to be hidden or shown from the CSS.

Features can also be checked within the JavaScript using a simple if-statement: if (Detector.webgl) { // do some WebGL stuff }


If WebGL isn’t supported, the user will not be able to complete the Clone A New Environment process, so we block them from the beginning.

The user can still visit the environment pages, but they will be greeted with a static captured image of the environment rather than the full 3D-rendered version.
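A minimal sketch of how this fallback might be wired up, using the Detector flags from the detection code (the initEnvironment3D and showCaptureImage names are hypothetical placeholders, not functions from the project):

```javascript
// Hypothetical sketch: choose between the 3D view and the static
// capture image based on the Detector flags.
function chooseEnvironmentView(detector) {
    // Returns which view to render; the caller swaps the DOM accordingly.
    return detector.webgl ? '3d' : 'capture';
}

// Example wiring (initEnvironment3D/showCaptureImage are placeholders):
// if (chooseEnvironmentView(Detector) === '3d') {
//     initEnvironment3D();
// } else {
//     showCaptureImage();
// }
```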

This is a good example of progressive enhancement.


If the user’s browser doesn’t support WebRTC, they will not be able to enjoy the webcam gesture feature. The user is warned when they click the webcam button that their browser is not supported.

We discussed whether we should hide the webcam button altogether if the browser isn’t supported, but decided to leave it visible to ensure that all users, regardless of their current browser, know that the feature is available, and may opt to change their browser in order to try out the feature.

CSS Filters

The CSS blur filter is used to add a heavily blurred background to the environment page. If the feature weren’t supported, the image would not be blurred and would look terrible.

If CSS filters aren’t supported, we hide the background completely.

Generating 3D terrain models using Google Elevation API, Google Static Maps and ThreeJS

By combining Google Elevation API, Google Static Maps API and ThreeJS, we can automatically generate and display 3D topographical representations of real-world areas using only code.

The following is the step-by-step process we went through to produce the outcome.

Getting the elevation data:

  1. Split the world into tiles
  2. Work out which tile the user has selected
  3. Get the latlng (latitude and longitude) boundaries of that tile
  4. Divide the tile into another grid (30×30)
  5. Loop through the grid, row by row, and get the latlng of each of those divisions
  6. Request the elevation of each latlng from Google Elevation API and store in an array
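Steps 3–5 above can be sketched as a small grid function (the bounds shape and function name are my own; the actual Elevation API call in step 6 is shown only as a comment, since it needs an API key and the loaded Maps JS client):

```javascript
// Sketch: divide a tile's latlng bounds into a grid and collect the
// latlng of each division, row by row (steps 3-5 above).
function gridLatLngs(bounds, divisions) {
    // bounds: { north, south, east, west } in degrees
    var latStep = (bounds.north - bounds.south) / (divisions - 1);
    var lngStep = (bounds.east - bounds.west) / (divisions - 1);
    var points = [];
    for (var row = 0; row < divisions; row++) {
        for (var col = 0; col < divisions; col++) {
            points.push({
                lat: bounds.south + row * latStep,
                lng: bounds.west + col * lngStep
            });
        }
    }
    return points;
}

// Step 6 would then pass batches of these points to the Elevation API,
// e.g. via the official JS client:
// new google.maps.ElevationService().getElevationForLocations(
//     { locations: points }, function(results, status) { /* store */ });
```

For a 30×30 grid this yields 900 latlngs per tile, which is why the requests are batched.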

Generating the height map:

A height map is a grey scale image where white means high and black means low.

In RGB, white = 255 and black = 0.

  1. Take the elevation array and find the maximum and minimum value
  2. Normalise all the data so that the minimum value = 0
  3. Calculate the ratio of all the values so that they lie between 0 and 255
  4. Convert the array into a string, with each column delimited by a comma and each new row signified by a dash, and pass the string into a PHP script
  5. The PHP parses and loops through the string to draw out the grey scale image pixel by pixel, which is then saved to disk
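Steps 1–4 amount to a small normalisation-and-serialisation routine; a sketch (the function name is my own, not from the project code):

```javascript
// Sketch of steps 1-4: normalise raw elevations into 0-255 grey values
// and serialise them row by row (columns comma-separated, rows joined
// with dashes) ready to pass to the PHP height-map script.
function elevationsToHeightMapString(rows) {
    // rows: array of arrays of raw elevations, one inner array per row
    var min = Infinity, max = -Infinity;
    rows.forEach(function(row) {
        row.forEach(function(v) {
            if (v < min) min = v;
            if (v > max) max = v;
        });
    });
    var range = max - min || 1; // avoid divide-by-zero on flat terrain
    return rows.map(function(row) {
        return row.map(function(v) {
            return Math.round(((v - min) / range) * 255);
        }).join(',');
    }).join('-');
}
```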

Getting the satellite image:

  1. From the selected tile boundaries, calculate the centre latlng of that tile
  2. Request the satellite image using Google Static Maps (proxying through a PHP script to bypass the Cross-Origin Policy)
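A sketch of step 1, plus the kind of Static Maps URL the PHP proxy would then fetch (parameter values are illustrative; a real request also needs an API key appended):

```javascript
// Sketch: centre latlng of a tile (step 1) and the Static Maps URL the
// PHP proxy would fetch on our behalf (step 2).
function tileCentre(bounds) {
    return {
        lat: (bounds.north + bounds.south) / 2,
        lng: (bounds.east + bounds.west) / 2
    };
}

function staticMapUrl(centre, zoom, size) {
    return 'https://maps.googleapis.com/maps/api/staticmap' +
           '?center=' + centre.lat + ',' + centre.lng +
           '&zoom=' + zoom +
           '&size=' + size + 'x' + size +
           '&maptype=satellite';
}
```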

Rendering the model:

  1. ThreeJS generates a mesh and downloads the height map and the satellite image
  2. The height map is applied as a displacement map
  3. The satellite image is applied as a texture
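At its core, applying the height map as a displacement map means offsetting each vertex of the plane mesh by its corresponding grey value; a simplified, ThreeJS-free sketch of that idea:

```javascript
// Simplified sketch of step 2: displace a grid of vertices using grey
// values (0-255) read from the height map. In ThreeJS this kind of
// offset is applied to the plane geometry before rendering.
function displaceVertices(vertices, greyValues, maxHeight) {
    // vertices: [{x, y, z}]; greyValues: one 0-255 value per vertex
    return vertices.map(function(v, i) {
        return {
            x: v.x,
            y: v.y,
            z: (greyValues[i] / 255) * maxHeight // white = high, black = low
        };
    });
}
```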

WebGL: The balancing act between hi-poly 3D models and browser/bandwidth performance

Considering the code required to simply load a 3D OBJ file into the browser amounts to a whopping 450KB, we are having to be very careful about the rest of the data we are loading into the page. Specifically, the size of the 3D models and how that affects the performance of WebGL.

Low poly

I start off with this 100 poly “mountain scene”.


The OBJ file for this model is 14KB. A pretty decent size for the web but not a very smooth looking model. The WebGL performance is of course pretty good at 63fps.

Hi poly

I then decided to push the upper limits: I turned on the Turbosmooth modifier and cranked the iterations up to 6. This produced a baby-smooth 820,000-poly model.


The exported OBJ is 64MB! Not something we can imagine asking users to download on a wifi connection, let alone a limited 3G one. It took ~9 seconds to load on my sandbox environment!

Considering the massive file size increase, I was pleasantly surprised to see that the WebGL performance only dropped by 3fps to 60.

Happy medium

Dropping the Turbosmooth iterations down to 2 produces a relatively smooth model (indistinguishable without a side-by-side comparison) with 3,200 polys at 218KB. Still quite a large file, but nowhere near 64MB.



The frame rate has also climbed back up to 63fps.

Future considerations

Some things we may need to consider are providing visual feedback in the form of a spinner or preloader while the OBJ is loading, and potentially showing a warning message that the page is going to be bandwidth-heavy.
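If we do add a preloader, download progress can be reported via XMLHttpRequest progress events; a sketch (loadOBJ and progressPercent are illustrative names, and the callbacks stand in for whatever spinner UI we end up with):

```javascript
// Sketch: report download progress while fetching the OBJ file.
// progressPercent is a pure helper; loadOBJ wires it to XHR events.
function progressPercent(loaded, total) {
    if (!total) return 0; // total is 0 if no Content-Length header
    return Math.round((loaded / total) * 100);
}

function loadOBJ(url, onProgress, onDone) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url);
    xhr.onprogress = function(e) {
        onProgress(progressPercent(e.loaded, e.total));
    };
    xhr.onload = function() { onDone(xhr.responseText); };
    xhr.send();
}
```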