ARCore With Google: Building an Augmented Images Application
Learn more about how you can create your own augmented images with Google's ARCore.
In this tutorial, you'll learn how to place 3D models in the real world by setting the anchor on a specific image rather than a regular plane. ARCore by Google lets you augment 2D images, which can be recognized by ARCore and then used to place 3D models over them.
You provide some reference images, and ARCore tracking determines where those images are physically located in the environment. Augmented images are already in wide use: books, newspapers, magazines, and so on.
But before you dive deeper into this tutorial, you should check out the following two articles as prerequisites:
Once you're done with those two, you should have a basic understanding of the terminology in ARCore and Sceneform, such as Scene, Anchor, Node, and TransformableNode.
What Are Augmented Images?
According to the developer docs, augmented images in ARCore let you build AR apps that can respond to 2D images, such as posters or product packaging, in the user's environment. You provide a set of reference images, and ARCore tracking tells you where those images are physically located in an AR session, once they are detected in the camera view.
Basically, using augmented images, you can turn a simple 2D image into an augmented image that your app can recognize and then use as the place to put a 3D model.
When Might You Want to Use Augmented Images?
Here are some restrictions that you might want to consider before using augmented images:
- Your use case must not involve scanning more than 20 images simultaneously, since ARCore can only track up to 20 images at once.
- The physical counterpart in the real world must be flat and larger than 15 cm by 15 cm.
- You don't want to track moving objects. ARCore cannot track moving images, although it can start tracking once an image stops moving.
- ARCore uses feature points in the reference image and can store feature point information for up to 1000 images.
Choosing a Good Reference Image
Here are some tips for choosing a good reference image to improve detectability by ARCore:
- Augmented images support the PNG, JPEG, and JPG formats.
- Detection is based on points of high contrast, so both color and black-and-white images are detected, regardless of whether a color or black-and-white reference image is used.
- The image's resolution must be at least 300 x 300 pixels.
- Using high-resolution images does not improve performance.
- Avoid images with repetitive features, such as patterns and polka dots.
- Use the arcoreimg tool to evaluate how good your reference image is. A score of at least 75 is recommended.
How to use the arcoreimg tool:
- Download the ARCore SDK for Android from this link:
- Extract the contents of the ZIP file anywhere you like.
- Navigate to the extracted folder and go to tools > arcoreimg > windows (or linux/macos, depending on your OS).
- Open a command prompt at this location.
- Now, enter this command:
arcoreimg.exe eval-img --input_image_path=dog.png
Replace dog.png with the complete path to your image.
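The same tool can also pre-build an entire image database offline via its build-db command. Here is a minimal sketch, assuming my_images.txt is a text file listing one image path per line and myimages.imgdb is an output path of your choosing:

arcoreimg.exe build-db --input_image_list_path=my_images.txt --output_db_path=myimages.imgdb

In this tutorial we build the database in code from a bitmap, but a pre-built .imgdb file can be loaded at runtime instead (see the sketch in the database section below).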
Getting Started With the Augmented Images Application
Now that you've familiarized yourself with ARCore and Sceneform and have selected a good reference image with a score of 75+, it's time to start coding the application!
Create a Custom Fragment
We will create a custom fragment to add to our activity. We need a custom one because we will alter some properties of the default fragment.
Create a class named CustomArFragment and extend it from ArFragment. Here is the code for CustomArFragment:
package com.ayusch.augmentedimages;

import android.util.Log;

import com.google.ar.core.Config;
import com.google.ar.core.Session;
import com.google.ar.sceneform.ux.ArFragment;

public class CustomArFragment extends ArFragment {
    @Override
    protected Config getSessionConfiguration(Session session) {
        getPlaneDiscoveryController().setInstructionView(null);
        Config config = new Config(session);
        config.setUpdateMode(Config.UpdateMode.LATEST_CAMERA_IMAGE);
        session.configure(config);
        getArSceneView().setupSession(session);
        return config;
    }
}
First of all, we set the plane discovery instruction view to null. By doing this, we turn off the hand icon that appears just after the fragment is initialized, which instructs the user to move their phone around. We don't need it anymore, as we are not detecting random planes but a specific image.
Next, we set the update mode for the session to LATEST_CAMERA_IMAGE. This configures the behavior of the update method: instead of blocking until a new frame arrives, update() returns immediately with the most recent camera image, so your update listener is called whenever the camera frame updates.
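For comparison, here is a one-line sketch of the default mode, BLOCKING, in which update() instead waits until a new camera frame is available:

// Default update mode, shown only for contrast with LATEST_CAMERA_IMAGE.
config.setUpdateMode(Config.UpdateMode.BLOCKING);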
Setting Up the Augmented Images Database
Add your chosen reference image (the one you want to detect in the physical world) to the assets folder. If your assets folder doesn't exist, create one. Now, we will add augmented images to our database, which will then be detected in the real world.
We'll set up this database as soon as the fragment (scene) is created. Then, we check whether this call succeeded or failed and log accordingly. Add the following code to your custom fragment:
if (((MainActivity) getActivity()).setupAugmentedImagesDb(config, session)) {
    Log.d("SetupAugImgDb", "Success");
} else {
    Log.e("SetupAugImgDb", "Failure setting up db");
}
This is what CustomArFragment looks like now:
package com.ayusch.augmentedimages;

import android.util.Log;

import com.google.ar.core.Config;
import com.google.ar.core.Session;
import com.google.ar.sceneform.ux.ArFragment;

public class CustomArFragment extends ArFragment {
    @Override
    protected Config getSessionConfiguration(Session session) {
        getPlaneDiscoveryController().setInstructionView(null);
        Config config = new Config(session);
        config.setUpdateMode(Config.UpdateMode.LATEST_CAMERA_IMAGE);
        session.configure(config);
        getArSceneView().setupSession(session);
        if (((MainActivity) getActivity()).setupAugmentedImagesDb(config, session)) {
            Log.d("SetupAugImgDb", "Success");
        } else {
            Log.e("SetupAugImgDb", "Failure setting up db");
        }
        return config;
    }
}
We will create the setupAugmentedImagesDb method in MainActivity shortly. Now, with CustomArFragment created, let's add it to our activity_main.xml. Here's the code for activity_main.xml:
<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <fragment
        android:id="@+id/sceneform_fragment"
        android:name="com.ayusch.augmentedimages.CustomArFragment"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</android.support.constraint.ConstraintLayout>
Notice that we set the name of this fragment to our CustomArFragment. This is necessary to ensure that the added fragment is our custom fragment, so that permission handling and session initialization are taken care of.
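As an aside (a sketch, not from this tutorial), the fragment could also be attached programmatically rather than declared in XML, assuming your layout has a container view with the hypothetical id fragment_container:

// Attach CustomArFragment in code instead of via the <fragment> tag.
getSupportFragmentManager()
        .beginTransaction()
        .replace(R.id.fragment_container, new CustomArFragment())
        .commit();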
Adding an Image to the Augmented Images Database
Here, we will set up our image database, find the reference image in the real world, and then add a 3D model accordingly.
Let's start by setting up our database. Create a public method setupAugmentedImagesDb in the MainActivity.java class:
public boolean setupAugmentedImagesDb(Config config, Session session) {
    AugmentedImageDatabase augmentedImageDatabase;
    Bitmap bitmap = loadAugmentedImage();
    if (bitmap == null) {
        return false;
    }
    augmentedImageDatabase = new AugmentedImageDatabase(session);
    augmentedImageDatabase.addImage("tiger", bitmap);
    config.setAugmentedImageDatabase(augmentedImageDatabase);
    return true;
}

private Bitmap loadAugmentedImage() {
    try (InputStream is = getAssets().open("blanket.jpeg")) {
        return BitmapFactory.decodeStream(is);
    } catch (IOException e) {
        Log.e("ImageLoad", "IO exception", e);
    }
    return null;
}
We also have the loadAugmentedImage method, which loads the image from the assets folder and returns a bitmap.
In setupAugmentedImagesDb, we first initialize our database for this session and then add an image to it, naming it "tiger." Then, we set the database on this session's configuration and return true, indicating that the image was added successfully.
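Two optional variants worth knowing about (sketches, not part of this tutorial's code): addImage has an overload that takes the physical width of the printed image in meters, which helps ARCore estimate its scale sooner, and AugmentedImageDatabase.deserialize can load a database pre-built with the arcoreimg build-db command shown earlier:

// Variant 1: pass the physical width in meters (here assuming a 15 cm print).
augmentedImageDatabase.addImage("tiger", bitmap, 0.15f);

// Variant 2: load a pre-built database, assuming a hypothetical
// assets/myimages.imgdb generated with arcoreimg build-db.
private boolean setupPrebuiltImagesDb(Config config, Session session) {
    try (InputStream is = getAssets().open("myimages.imgdb")) {
        AugmentedImageDatabase db = AugmentedImageDatabase.deserialize(session, is);
        config.setAugmentedImageDatabase(db);
        return true;
    } catch (IOException e) {
        Log.e("ImageLoad", "Failed to load pre-built image database.", e);
        return false;
    }
}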
Detecting the Reference Images in the Real World
Now, we will start detecting our reference images in the real world. To do so, we will add a listener to our scene that is called every time a frame is created, and each frame will be analyzed for our reference image.
Add this line to the onCreate method of MainActivity.java:
arFragment.getArSceneView().getScene().addOnUpdateListener(this::onUpdateFrame);
Now, add the onUpdateFrame method to MainActivity:
@RequiresApi(api = Build.VERSION_CODES.N)
private void onUpdateFrame(FrameTime frameTime) {
    Frame frame = arFragment.getArSceneView().getArFrame();
    Collection<AugmentedImage> augmentedImages = frame.getUpdatedTrackables(AugmentedImage.class);
    for (AugmentedImage augmentedImage : augmentedImages) {
        if (augmentedImage.getTrackingState() == TrackingState.TRACKING) {
            if (augmentedImage.getName().equals("tiger") && shouldAddModel) {
                placeObject(arFragment, augmentedImage.createAnchor(augmentedImage.getCenterPose()), Uri.parse("mesh_bengaltiger.sfb"));
                shouldAddModel = false;
            }
        }
    }
}
In the first line, we get the frame from the scene. A frame can be imagined as a snapshot in the middle of a video: a video is a series of still pictures displayed one after another in rapid succession, giving the impression of motion, and we are extracting one of those pictures.
Once we have the frame, we analyze it for our reference image. We extract a list of all the items ARCore has tracked using frame.getUpdatedTrackables. This is a collection of all the detected images. We then loop over the collection and check whether our image "tiger" is present in the frame. If we find a match, we go ahead and place a 3D model over the detected image.
Note: I have added the shouldAddModel flag to ensure that we add the model only once.
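If you ever want the model to be placed again after tracking is lost, a small sketch (not in the original code) is to re-arm the flag inside the same loop when ARCore reports the image as stopped:

// Hypothetical extension of the loop above: once the "tiger" image is no
// longer tracked, allow a later detection to place the model again.
if (augmentedImage.getName().equals("tiger")
        && augmentedImage.getTrackingState() == TrackingState.STOPPED) {
    shouldAddModel = true;
}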
Placing a 3D Model Over the Reference Image
Now that we have detected our image in the real world, we can start adding 3D models over it. We will copy the placeObject and addNodeToScene methods from our previous project and add them here.
Although I have previously explained what these methods do line by line, here is an overview:
- placeObject: This method builds a renderable from the provided URI. Once the renderable is built, it is passed to the addNodeToScene method, where it is attached to a node and that node is placed onto the scene.
- addNodeToScene: This method creates an AnchorNode from the received anchor, creates another node to which the renderable is attached, then adds this node to the AnchorNode and adds the AnchorNode to the scene.
Here is our final MainActivity.java class:
package com.ayusch.augmentedimages;

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.net.Uri;
import android.os.Build;
import android.support.annotation.RequiresApi;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.util.Log;
import android.widget.Toast;

import com.google.ar.core.Anchor;
import com.google.ar.core.AugmentedImage;
import com.google.ar.core.AugmentedImageDatabase;
import com.google.ar.core.Config;
import com.google.ar.core.Frame;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;
import com.google.ar.sceneform.AnchorNode;
import com.google.ar.sceneform.FrameTime;
import com.google.ar.sceneform.rendering.ModelRenderable;
import com.google.ar.sceneform.rendering.Renderable;
import com.google.ar.sceneform.ux.ArFragment;
import com.google.ar.sceneform.ux.TransformableNode;

import java.io.IOException;
import java.io.InputStream;
import java.util.Collection;

public class MainActivity extends AppCompatActivity {
    ArFragment arFragment;
    boolean shouldAddModel = true;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        arFragment = (CustomArFragment) getSupportFragmentManager().findFragmentById(R.id.sceneform_fragment);
        arFragment.getPlaneDiscoveryController().hide();
        arFragment.getArSceneView().getScene().addOnUpdateListener(this::onUpdateFrame);
    }

    @RequiresApi(api = Build.VERSION_CODES.N)
    private void placeObject(ArFragment arFragment, Anchor anchor, Uri uri) {
        ModelRenderable.builder()
                .setSource(arFragment.getContext(), uri)
                .build()
                .thenAccept(modelRenderable -> addNodeToScene(arFragment, anchor, modelRenderable))
                .exceptionally(throwable -> {
                            Toast.makeText(arFragment.getContext(), "Error: " + throwable.getMessage(), Toast.LENGTH_LONG).show();
                            return null;
                        }
                );
    }

    @RequiresApi(api = Build.VERSION_CODES.N)
    private void onUpdateFrame(FrameTime frameTime) {
        Frame frame = arFragment.getArSceneView().getArFrame();
        Collection<AugmentedImage> augmentedImages = frame.getUpdatedTrackables(AugmentedImage.class);
        for (AugmentedImage augmentedImage : augmentedImages) {
            if (augmentedImage.getTrackingState() == TrackingState.TRACKING) {
                if (augmentedImage.getName().equals("tiger") && shouldAddModel) {
                    placeObject(arFragment, augmentedImage.createAnchor(augmentedImage.getCenterPose()), Uri.parse("mesh_bengaltiger.sfb"));
                    shouldAddModel = false;
                }
            }
        }
    }

    public boolean setupAugmentedImagesDb(Config config, Session session) {
        AugmentedImageDatabase augmentedImageDatabase;
        Bitmap bitmap = loadAugmentedImage();
        if (bitmap == null) {
            return false;
        }
        augmentedImageDatabase = new AugmentedImageDatabase(session);
        augmentedImageDatabase.addImage("tiger", bitmap);
        config.setAugmentedImageDatabase(augmentedImageDatabase);
        return true;
    }

    private Bitmap loadAugmentedImage() {
        try (InputStream is = getAssets().open("blanket.jpeg")) {
            return BitmapFactory.decodeStream(is);
        } catch (IOException e) {
            Log.e("ImageLoad", "IO exception", e);
        }
        return null;
    }

    private void addNodeToScene(ArFragment arFragment, Anchor anchor, Renderable renderable) {
        AnchorNode anchorNode = new AnchorNode(anchor);
        TransformableNode node = new TransformableNode(arFragment.getTransformationSystem());
        node.setRenderable(renderable);
        node.setParent(anchorNode);
        arFragment.getArSceneView().getScene().addChild(anchorNode);
        node.select();
    }
}
Now, run your app. You should see a screen as shown below. Move your phone around a bit over the reference object. ARCore will detect the feature points and, as soon as it detects the reference image in the real world, add your 3D model onto it. In the image below, I used my blanket as a reference:
With this, we have created our very first augmented images app using ARCore by Google and the Sceneform SDK!
Like what you read? Don't forget to share this post with your friends and colleagues!