Face Detection Using HTML5, JavaScript, WebRTC, WebSockets, Jetty and OpenCV
Through HTML5 and the corresponding standards, modern browsers get more standardized features with every release. Most people have heard of WebSockets, which let you easily set up a two-way communication channel with a server, but one of the specifications that hasn't been getting much coverage is the WebRTC specification.
With the WebRTC specification it will become easier to create pure HTML/JavaScript real-time video/audio applications where you can access a user's microphone or webcam and share this data with other peers on the internet. For instance, you can create video conferencing software that doesn't require a plugin, create a baby monitor using your mobile phone, or more easily facilitate webcasts, all using cross-browser features without a single plugin.
As with a lot of HTML5-related specifications, the WebRTC one isn't quite finished yet, and support amongst browsers is minimal. However, you can still do very cool things with the support that is currently available in the development builds of Opera and the latest Chrome builds. In this article I'll show you how to use WebRTC and a couple of other HTML5 standards to accomplish this.
For this we need to take the following steps:
- Access the user's webcam through the getUserMedia feature
- Send the webcam data using WebSockets to a server
- At the server, analyze the received data, using JavaCV/OpenCV to detect and mark any face that is recognized
- Use WebSockets to send the data back from the server to the client
- Show the received information from the server in the client
In other words, we're going to create a real-time face detection system where the frontend is completely provided by 'standard' HTML5/JavaScript functionality. As you'll see in this article, we'll have to use a couple of workarounds, because some features haven't been implemented yet.
Which tools and technologies do we use?
Let's start by looking at the tools and technologies that we'll use to create our HTML5 face detection system. We'll start with the frontend technologies.
- WebRTC: The specification page says this: "These APIs should enable building applications that can be run inside a browser, requiring no extra downloads or plugins, that allow communication between parties using audio, video and supplementary real-time communication, without having to use intervening servers (unless needed for firewall traversal, or for providing intermediary services)."
- WebSockets: Again, from the spec: "To enable web applications to maintain bidirectional communications with server-side processes, this specification introduces the WebSocket interface."
- Canvas: And also from the spec: the canvas element "provides scripts with a resolution-dependent bitmap canvas, which can be used for rendering graphs, game graphics, or other visual images on the fly."
What do we use at the backend?
- Jetty: Provides us with a great WebSockets implementation.
- OpenCV: A library that has all kinds of algorithms for image manipulation. We use its support for face detection.
- JavaCV: We want to use OpenCV directly from Jetty to detect faces in the images we receive. With JavaCV we can use the features of OpenCV through a Java wrapper.
Frontend step 1: enable MediaStream in Chrome and access the webcam
Let's start with accessing the webcam. In my example I've used the latest version of Chrome (Canary), which has support for this part of the WebRTC specification. Before you can use it, you first have to enable it. You can do this by opening the "chrome://flags/" URL and enabling the MediaStream feature:
Once you've enabled it (and have restarted the browser), you can use some of the features of WebRTC to access the webcam directly from the browser without having to use a plugin. All you need to do to access the webcam is use the following piece of HTML and JavaScript:
<div> <video id="live" width="320" height="240" autoplay></video> </div>
And the following JavaScript:
video = document.getElementById("live")
var ctx;

// use the Chrome-specific getUserMedia function
navigator.webkitGetUserMedia("video",
    function(stream) {
        video.src = webkitURL.createObjectURL(stream);
    },
    function(err) {
        console.log("Unable to get video stream!")
    }
)
With this small piece of HTML and JavaScript we can access the user's webcam and show the stream in the HTML5 video element. We do this by first requesting access to the webcam using the getUserMedia function (prefixed with the Chrome-specific webkit prefix). In the callback we pass in, we get access to a stream object. This stream object is the stream from the user's webcam. To show this stream we need to attach it to the video element. The src attribute of the video element allows us to specify a URL to play. With another new HTML5 feature we can convert the stream to a URL. This is done using the URL.createObjectURL function (once again prefixed). The result of this function is a URL which we attach to the video element. And that's all it takes to get access to the stream of a user's webcam.
The next thing we want to do is send this stream, using WebSockets, to the Jetty server.
Frontend step 2: send the stream to the Jetty server over WebSockets
In this step we want to take the data from the stream and send it as binary data over a WebSocket to the listening Jetty server. In theory this sounds simple: we've got a binary stream of video information, so we should be able to just access the bytes and, instead of streaming the data to the video element, stream it over a WebSocket to our remote server. In practice, though, this doesn't work. The stream object you receive from the getUserMedia function call doesn't have an option to access its data as a stream. Or better said, not yet. If you look at the specifications, you should be able to call record() to get access to a recorder. This recorder can then be used to access the raw data. Unfortunately, this functionality isn't supported yet in any browser. So we need to find an alternative. For this we basically have just one option:
- Take a snapshot of the current video.
- Paint this to the canvas element.
- Grab the data from the canvas as an image.
- Send the image data over WebSockets.
It's a bit of a workaround that causes a lot of extra processing on the client side and results in a much higher amount of data being sent to the server, but it works. Implementing this isn't that hard:
<div> <video id="live" width="320" height="240" autoplay style="display: inline;"></video> <canvas width="320" id="canvas" height="240" style="display: inline;"></canvas> </div> <script type="text/javascript"> var video = $("#live").get()[0]; var canvas = $("#canvas"); var ctx = canvas.get()[0].getcontext('2d'); navigator.webkitgetusermedia("video", function(stream) { video.src = webkiturl.createobjecturl(stream); }, function(err) { console.log("unable to get video stream!") } ) timer = setinterval( function () { ctx.drawimage(video, 0, 0, 320, 240); }, 250); </script>
Not that much more complex than our previous piece of code. What we added was a timer and a canvas on which we can draw. This timer runs every 250 ms and draws the current video image to the canvas (as you can see in the following screenshot):
As you can see, the canvas has a bit of a delay. You can tune this by setting the interval lower, but this does require a lot more resources.
The next step is to grab the image from the canvas, convert it to binary, and send it over a WebSocket. Before we look at the WebSocket part, let's first look at the data part. To get the data we extend the timer function with the following piece of code:
timer = setInterval(
    function () {
        ctx.drawImage(video, 0, 0, 320, 240);
        var data = canvas.get()[0].toDataURL('image/jpeg', 1.0);
        newblob = dataURItoBlob(data);
    }, 250);
The toDataURL function copies the content from the current canvas and stores it in a data URL. A data URL is a string containing base64-encoded binary data. For our example it looks a bit like this:
data:image/jpeg;base64,/9j/4aaqskzjrgabaqaaaqabaad/2wbdaaebaqebaqebaqebaqeb aqebaqebaqebaqebaqebaqebaqebaqebaqebaqebaqebaqebaqebaqebaqebaqeb aqh/2wbdaqebaqebaqebaqebaqebaqebaqebaqebaqebaqebaqebaqebaqebaqeba qebaqebaqebaqebaqebaqebaqebaqh/waarcadwauadasiaahebaxeb/8qahwaaaqu .. snip .. qxl7fdbd+0sxjyz3ma5wosyxwmebviflrvmiufmwuo7o75q4osbys5l57xcojuvasttpyfj rsjkfxayklzkpzxc1zvxvpxlgrko1k8pplje6bs22oxsau4r9289jnjulirpql4p44fcqmkymjrs+z vhpnuzdbjlmjvaduwmlzsiy4otmbvaxudm+aqw2vsvzioyqqwg1j8hxt6velxd7l5caot5q4kj uc4rku4qjopi4tnxxkua01y8uijjtsts80le0z6wjjuz5pxaa//2q==
We could send this over as a text message and let the server side decode it, but since WebSockets also allow us to send binary data, we'll convert this to binary. We need to do this in two steps, since canvas doesn't allow us (or I don't know how) direct access to the binary data. Luckily someone at Stack Overflow created a nice helper method for this (dataURItoBlob) that does exactly what we need (for more info and the code see this post; a sketch is shown below). At this point we've got a data array containing a screenshot of the current video, taken at the specified interval. The next, and for now final, step at the client is to send this using WebSockets.
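The helper itself isn't reproduced in this article, so here is a minimal sketch of what such a dataURItoBlob function can look like. This is an assumed equivalent, not the exact code from the linked post (which used the older BlobBuilder API; this sketch uses the standard Blob constructor instead):

// Sketch of a dataURItoBlob helper (assumed equivalent of the helper from the
// linked Stack Overflow post, using the Blob constructor instead of BlobBuilder).
function dataURItoBlob(dataURI) {
    // strip the "data:image/jpeg;base64," header and decode the base64 payload
    var byteString = atob(dataURI.split(',')[1]);
    var mimeType = dataURI.split(',')[0].split(':')[1].split(';')[0];

    // copy the decoded characters into a typed array of bytes
    var buffer = new Uint8Array(byteString.length);
    for (var i = 0; i < byteString.length; i++) {
        buffer[i] = byteString.charCodeAt(i);
    }

    // wrap the bytes in a Blob so they can be sent as a binary websocket message
    return new Blob([buffer], { type: mimeType });
}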
Using WebSockets from JavaScript is actually very easy. You just need to specify the WebSocket URL and implement a couple of callback functions. The first thing we need to do is open the connection:
var ws = new WebSocket("ws://127.0.0.1:9999");

ws.onopen = function () {
    console.log("Opened connection to websocket");
}
Assuming everything went OK, we now have a two-way WebSocket connection. Sending data over this connection is as easy as calling ws.send:
timer = setInterval(
    function () {
        ctx.drawImage(video, 0, 0, 320, 240);
        var data = canvas.get()[0].toDataURL('image/jpeg', 1.0);
        newblob = dataURItoBlob(data);
        ws.send(newblob);
    }, 250);
That's it for the client side. If we open this page, we request access to the user's webcam, show the stream from the webcam in a video element, capture the video at a specific interval, and send the data using WebSockets to the backend server for further processing.
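One small addition that is not part of the original example, but easy to sketch: the timer keeps capturing frames regardless of the connection state, so it makes sense to stop it when the WebSocket closes and to log errors:

// Sketch (not in the original example): stop capturing frames when the
// connection drops, so we don't keep converting images that can't be sent.
ws.onclose = function () {
    console.log("WebSocket connection closed, stopping capture");
    clearInterval(timer);
};

ws.onerror = function (err) {
    console.log("WebSocket error: " + err);
};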
Set up the backend environment
The backend for this example was created using Jetty's WebSocket support (in a later article I'll see if I can also get it running using Play 2.0's WebSocket support). With Jetty it is really easy to launch a server with a WebSocket listener. I usually run Jetty embedded, and to get WebSockets up and running I use the following simple Jetty launcher:
public class WebSocketServer extends Server {

    private final static Logger LOG = Logger.getLogger(WebSocketServer.class);

    public WebSocketServer(int port) {
        SelectChannelConnector connector = new SelectChannelConnector();
        connector.setPort(port);
        addConnector(connector);

        WebSocketHandler wsHandler = new WebSocketHandler() {
            public WebSocket doWebSocketConnect(HttpServletRequest request, String protocol) {
                return new FaceDetectWebSocket();
            }
        };
        setHandler(wsHandler);
    }

    /**
     * Simple innerclass that is used to handle websocket connections.
     *
     * @author jos
     */
    private static class FaceDetectWebSocket implements WebSocket,
            WebSocket.OnBinaryMessage, WebSocket.OnTextMessage {

        private Connection connection;
        private FaceDetection faceDetection = new FaceDetection();

        public FaceDetectWebSocket() {
            super();
        }

        /**
         * On open we set the connection locally, and enable
         * binary support
         */
        public void onOpen(Connection connection) {
            this.connection = connection;
            this.connection.setMaxBinaryMessageSize(1024 * 512);
        }

        /**
         * Cleanup if needed. Not used for this example
         */
        public void onClose(int code, String message) {}

        /**
         * Text messages are not used in this example; the empty stub is only
         * here because the OnTextMessage interface requires it.
         */
        public void onMessage(String data) {}

        /**
         * When we receive a binary message we assume it is an image. We then run this
         * image through our face detection algorithm and send back the response.
         */
        public void onMessage(byte[] data, int offset, int length) {
            ByteArrayOutputStream bOut = new ByteArrayOutputStream();
            bOut.write(data, offset, length);
            try {
                byte[] result = faceDetection.convert(bOut.toByteArray());
                this.connection.sendMessage(result, 0, result.length);
            } catch (IOException e) {
                LOG.error("Error in facedetection, ignoring message:" + e.getMessage());
            }
        }
    }

    /**
     * Start the server on port 9999
     */
    public static void main(String[] args) throws Exception {
        WebSocketServer server = new WebSocketServer(9999);
        server.start();
        server.join();
    }
}
A big source file, but not so hard to understand. The important part is creating a handler that supports the WebSocket protocol. In this listing we create a WebSocketHandler that always returns the same WebSocket. In a real-world scenario you'd determine the type of WebSocket based on properties or the URL; in this example we just always return the same one.
The WebSocket itself isn't that complex either, but we do need to configure a couple of things for everything to work correctly. In the onOpen method we do the following:
public void onOpen(Connection connection) {
    this.connection = connection;
    this.connection.setMaxBinaryMessageSize(1024 * 512);
}
This enables support for binary messages. Our WebSocket can now receive binary messages up to 512 KB. Since we don't directly stream the data, but send a canvas-rendered image, the message size is rather large; 512 KB, however, is more than enough for 640x480 images. Our face detection also works great with a resolution of just 320x240, so this should be enough. The processing of the received binary image is done in the onMessage method:
public void onMessage(byte[] data, int offset, int length) {
    ByteArrayOutputStream bOut = new ByteArrayOutputStream();
    bOut.write(data, offset, length);
    try {
        byte[] result = faceDetection.convert(bOut.toByteArray());
        this.connection.sendMessage(result, 0, result.length);
    } catch (IOException e) {
        LOG.error("Error in facedetection, ignoring message:" + e.getMessage());
    }
}
This isn't really optimized code, but its intentions should be clear. We get the data sent from the client, write it to a byte array and pass it on to the FaceDetection class. This FaceDetection class does its magic and returns the processed image. This processed image is the same as the original one, but now with a yellow rectangle indicating the detected face.
This processed image is sent back over the same WebSocket connection to be processed by the HTML client. Before we look at how we can show this data using JavaScript, we'll have a quick look at the FaceDetection class.
The FaceDetection class uses a CvHaarClassifierCascade from JavaCV, the Java wrapper for OpenCV, to detect a face. I won't go into too much detail about how face detection works, since that is a very extensive subject in itself.
public class FaceDetection {

    private static final String CASCADE_FILE = "resources/haarcascade_frontalface_alt.xml";

    private int minSize = 20;
    private int group = 0;
    private double scale = 1.1;

    /**
     * Based on FaceDetection example from JavaCV.
     */
    public byte[] convert(byte[] imageData) throws IOException {
        // create image from supplied bytearray
        IplImage originalImage = cvDecodeImage(cvMat(1, imageData.length, CV_8UC1, new BytePointer(imageData)));

        // convert to grayscale for recognition
        IplImage grayImage = IplImage.create(originalImage.width(), originalImage.height(), IPL_DEPTH_8U, 1);
        cvCvtColor(originalImage, grayImage, CV_BGR2GRAY);

        // storage is needed to store information during detection
        CvMemStorage storage = CvMemStorage.create();

        // configuration to use in analysis
        CvHaarClassifierCascade cascade = new CvHaarClassifierCascade(cvLoad(CASCADE_FILE));

        // we detect the faces
        CvSeq faces = cvHaarDetectObjects(grayImage, cascade, storage, scale, group, minSize);

        // we iterate over the discovered faces and draw yellow rectangles around them
        for (int i = 0; i < faces.total(); i++) {
            CvRect r = new CvRect(cvGetSeqElem(faces, i));
            cvRectangle(originalImage, cvPoint(r.x(), r.y()),
                    cvPoint(r.x() + r.width(), r.y() + r.height()),
                    CvScalar.YELLOW, 1, CV_AA, 0);
        }

        // convert the resulting image back to an array
        ByteArrayOutputStream bout = new ByteArrayOutputStream();
        BufferedImage imgb = originalImage.getBufferedImage();
        ImageIO.write(imgb, "png", bout);

        return bout.toByteArray();
    }
}
The code should at least explain the steps. For more info on how this really works you should look at the OpenCV and JavaCV websites. By changing the cascade file and playing around with the minSize, group and scale properties you can also use this to detect eyes, noses, ears, pupils, etc. For instance, you can do eye detection by swapping in an eye cascade file.
Frontend: display the detected face
The final step is to receive the message sent by Jetty in our web application and render it to an img element. We do this by setting the onmessage function on our WebSocket. In the following code we receive the binary message, convert this data to an object URL (think of it as a local, temporary URL), and set this value as the source of the image. Once the image is loaded, we revoke the object URL since it is no longer needed.
ws.onmessage = function (msg) {
    var target = document.getElementById("target");
    url = window.webkitURL.createObjectURL(msg.data);
    target.onload = function() {
        window.webkitURL.revokeObjectURL(url);
    };
    target.src = url;
}
We now only need to update our HTML to the following:
<div style="visibility: hidden; width:0; height:0;"> <canvas width="320" id="canvas" height="240"></canvas> </div> <div> <video id="live" width="320" height="240" autoplay style="display: inline;"></video> <img id="target" style="display: inline;"/> </div>
And we've got working face detection:
As you've seen, we can do a lot with just the new HTML5 APIs. It's too bad not all of them are finished and support across browsers is in some cases a bit lacking, but they do offer us nice and powerful features. I've tested this example on the latest version of Chrome and on Safari (for Safari, remove the webkit prefixes). It should, however, also work in the "usermedia"-enabled mobile Safari browser. Make sure, though, that you're on high-bandwidth WiFi, since this code isn't optimized at all for bandwidth. I'll revisit this article in a couple of weeks, when I have time to make a Play 2/Scala based version of the backend.
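The remark about removing the webkit prefixes for Safari can also be handled in code. The following is only a sketch of that idea, not something from the original example; note that the getUserMedia argument format was still in flux at the time, so the string-style "video" argument here simply mirrors the Chrome build used in this article:

// Sketch (not from the original example): fall back between prefixed and
// unprefixed names so the same code path runs on Chrome and Safari builds.
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia;
window.URL = window.URL || window.webkitURL;

navigator.getUserMedia("video",
    function (stream) {
        video.src = window.URL.createObjectURL(stream);
    },
    function (err) {
        console.log("Unable to get video stream!");
    }
);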
Published at DZone with permission of Jos Dirksen, DZone MVB. See the original article here.