
How ARCore enables you to create brand new types of user interaction (Part III)

May 1, 2018 in Technology | 8 min. read

In part I and part II of our ARCore blog series, we shared how you can leverage ARCore for Unity features like motion tracking to create a Jenga AR game, or light estimation to trigger object behavior. In this post, we want to share some cool ARCore for Unity experiments that show what you can do with data generated from the camera feed.

A handheld device’s camera is not just for taking photos and videos

ARCore for Unity enhances the utility of the camera by bringing contextual data to the user experience. To showcase some of the things you can do, we asked several of our top engineers to create AR experiments, including a breakdown of their techniques and code snippets, so you can explore them on your own. Here are just a few things you can start testing today!

World Captures

By Dan Miller

Contextual applications of AR, that is, those that live in and interact with the real world, are perhaps the most mainstream use case. With World Captures, you can use the camera feed to capture and record a moment in time and space so you can share it in its context. Inspired by Zach Lieberman, World Captures spawns a quad in space that uses a screenshot of the camera feed as its texture.

To turn the camera feed into a quad texture, I used the ScreenCapture.CaptureScreenshotAsTexture API. Once the screenshot is captured, I can easily apply it as a texture to a material on a quad that is spawned when the user taps the screen. Note that you need to wait until the end of the frame to give the application time to render the full screenshot into the texture.

The code snippet below will help you experiment with World Capture with ARCore for Unity.

    IEnumerator CaptureScreenshot()
    {
        // Wait for the end of the frame so the camera feed has fully rendered.
        yield return new WaitForEndOfFrame();
        PlaneTex = ScreenCapture.CaptureScreenshotAsTexture();
        yield return new WaitForEndOfFrame();

        // Spawn the quad facing the same way as the camera and apply the screenshot as its texture.
        GameObject newPlane = Instantiate(plane, spawnPoint.position, Camera.main.transform.rotation);
        newPlane.GetComponent<MeshRenderer>().material.mainTexture = PlaneTex;
        PlaneTex.Apply();
    }
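
For completeness, the coroutine could be driven by a simple tap check along these lines. This is a minimal sketch rather than part of the original snippet, and it assumes the component also declares the plane, spawnPoint, and PlaneTex fields used above.

    void Update()
    {
        // Kick off a capture when the user first touches the screen.
        if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            StartCoroutine(CaptureScreenshot());
        }
    }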

AR Camera Lighting

By John Sietsma

Use the camera feed to provide lighting and reflections to virtual objects.

It’s difficult to create the illusion that virtual objects blend with the real world as if they actually exist. A key component in creating this illusion is lighting 3D digital objects with the real light and reflections around them.

AR Camera Lighting allows you to do that by creating a skybox based on the camera feed. You can then use the skybox in your Unity scene to add lighting to virtual objects, and use a reflection probe to create reflections from the skybox.

Because the image captured from your camera view won’t be enough to cover a sphere, the lighting and reflections won’t be fully accurate. Still, the illusion it creates is very compelling, particularly when the user is moving the camera or the model itself is moving.

To create the sphere, I transfer the camera feed into a RenderTexture, accessing ARCore’s camera texture with a GLSL shader. You can find more thorough instructions and all the assets used in AR Camera Lighting here.
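
To give a sense of how the pieces fit together once the camera feed is in a RenderTexture, here is a minimal sketch of the skybox and reflection-probe setup. The class name, fields, and per-frame refresh are illustrative assumptions rather than the project’s actual code; the linked assets contain the real implementation.

    // A minimal sketch, not the project's actual code: drive the scene's
    // environment lighting and reflections from a camera-feed texture.
    using UnityEngine;
    using UnityEngine.Rendering;

    public class CameraFeedSkybox : MonoBehaviour
    {
        public Material skyboxMaterial;          // uses a skybox shader that samples _MainTex (e.g. Skybox/Panoramic)
        public RenderTexture cameraFeedTexture;  // RenderTexture written from the camera feed
        public ReflectionProbe reflectionProbe;  // realtime probe refreshed via scripting

        void Start()
        {
            // Use the camera-feed texture as the skybox and let it drive ambient light.
            skyboxMaterial.mainTexture = cameraFeedTexture;
            RenderSettings.skybox = skyboxMaterial;
            RenderSettings.ambientMode = AmbientMode.Skybox;
        }

        void Update()
        {
            // Re-bake environment lighting and reflections as the feed changes.
            // (Doing this every frame is expensive; a real project would throttle it.)
            DynamicGI.UpdateEnvironment();
            reflectionProbe.RenderProbe();
        }
    }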

Feature Point Colors

By Amy DiGiovanni

Use the camera feed to place pixelated cubes at visible feature points. Each cube is colored based on the camera pixel at its feature point’s screen position.

Feature Point Colors showcases how you can use visual cues to add depth and shape, and to call out distinct elements of the real-world objects in your camera view.

I use GoogleARCore’s TextureReader component to get the raw camera texture from the GPU, and then build a friendlier representation of the pixels from that texture. For performance, the cubes are all spawned up-front based on a pool size and are activated and deactivated as needed.
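
The pool setup itself isn’t shown in the post, but it might look roughly like the sketch below. The poolSize, m_PixelObjects, and m_PixelMaterials names match the snippets that follow, while the cubePrefab field and the Start-time setup are assumptions.

    // A hedged sketch of the up-front pooling (the post doesn't show this part).
    // poolSize, m_PixelObjects, and m_PixelMaterials match the snippets below;
    // cubePrefab is a hypothetical field for the pixel-cube prefab.
    public GameObject cubePrefab;
    public int poolSize;

    private GameObject[] m_PixelObjects;
    private Material[] m_PixelMaterials;

    void Start()
    {
        // Spawn every cube once and cache its material, so per-frame work is
        // limited to activating, moving, and tinting cubes from the pool.
        m_PixelObjects = new GameObject[poolSize];
        m_PixelMaterials = new Material[poolSize];
        for (var i = 0; i < poolSize; i++)
        {
            var cube = Instantiate(cubePrefab, transform);
            cube.SetActive(false);
            m_PixelObjects[i] = cube;
            m_PixelMaterials[i] = cube.GetComponent<MeshRenderer>().material;
        }
    }

With the pool in place, the OnImageAvailable callback converts the raw camera pixels into a friendlier color array: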

    void OnImageAvailable(TextureReaderApi.ImageFormatType format, int width, int height, IntPtr pixelBuffer, int bufferSize)
    {
        if (format != TextureReaderApi.ImageFormatType.ImageFormatColor)
            return;

        // Adjust buffer size if necessary.
        if (bufferSize != m_PixelBufferSize || m_PixelByteBuffer.Length == 0)
        {
            m_PixelBufferSize = bufferSize;
            m_PixelByteBuffer = new byte[bufferSize];
            m_PixelColors = new Color[width * height];
        }

        // Move raw data into managed buffer.
        System.Runtime.InteropServices.Marshal.Copy(pixelBuffer, m_PixelByteBuffer, 0, bufferSize);

        // Interpret pixel buffer differently depending on which orientation the device is.
        // We need to get pixel colors into a friendly format - an array
        // laid out row by row from bottom to top, and left to right within each row.
        var bufferIndex = 0;
        for (var y = 0; y < height; ++y)
        {
            for (var x = 0; x < width; ++x)
            {
                int r = m_PixelByteBuffer[bufferIndex++];
                int g = m_PixelByteBuffer[bufferIndex++];
                int b = m_PixelByteBuffer[bufferIndex++];
                int a = m_PixelByteBuffer[bufferIndex++];
                var color = new Color(r / 255f, g / 255f, b / 255f, a / 255f);
                int pixelIndex;
                switch (Screen.orientation)
                {
                    case ScreenOrientation.LandscapeRight:
                        pixelIndex = y * width + width - 1 - x;
                        break;
                    case ScreenOrientation.Portrait:
                        pixelIndex = (width - 1 - x) * height + height - 1 - y;
                        break;
                    case ScreenOrientation.LandscapeLeft:
                        pixelIndex = (height - 1 - y) * width + x;
                        break;
                    default:
                        pixelIndex = x * height + y;
                        break;
                }
                m_PixelColors[pixelIndex] = color;
            }
        }

        FeaturePointCubes();
    }

Once I have a friendly representation of the pixel colors, I go through all points in the ARCore point cloud (until the pool size is reached), and then I position cubes at any points that are visible in screen space. Each cube is colored based on the pixel at its feature point’s screen space position.

    void FeaturePointCubes()
    {
        foreach (var pixelObj in m_PixelObjects)
        {
            pixelObj.SetActive(false);
        }

        var index = 0;
        var pointsInViewCount = 0;
        var camera = Camera.main;
        var scaledScreenWidth = Screen.width / k_DimensionsInverseScale;
        while (index < Frame.PointCloud.PointCount && pointsInViewCount < poolSize)
        {
            // If a feature point is visible, use its screen space position to get the correct color for its cube
            // from our friendly-formatted array of pixel colors.
            var point = Frame.PointCloud.GetPoint(index);
            var screenPoint = camera.WorldToScreenPoint(point);
            if (screenPoint.x >= 0 && screenPoint.x < camera.pixelWidth &&
                screenPoint.y >= 0 && screenPoint.y < camera.pixelHeight)
            {
                var pixelObj = m_PixelObjects[pointsInViewCount];
                pixelObj.SetActive(true);
                pixelObj.transform.position = point;
                var scaledX = (int)screenPoint.x / k_DimensionsInverseScale;
                var scaledY = (int)screenPoint.y / k_DimensionsInverseScale;
                m_PixelMaterials[pointsInViewCount].color = m_PixelColors[scaledY * scaledScreenWidth + scaledX];
                pointsInViewCount++;
            }
            index++;
        }
    }

Full code for the FeaturePointColors component can be found here.

Sobel Spaces

By Stella Cannefax

Use the camera feed to draw shapes from one side of the screen to the other, creating interesting spatial effects.

Sobel Spaces is an example of how you can use the camera feed to reveal new layers of information from the real world. Emphasizing edges or creating visually compelling filters that alter the viewport are just two examples of what you can do.

The experiment is based on the Sobel operator, a common method of detecting edges in an image and producing an output with those edges emphasized. Sobel Spaces is a modified version of the ComputerVision sample from the ARCore SDK. All that’s really changed is how the Sobel filter works:

    // Reused managed copy of the camera image (declared here so the snippet stands alone).
    static byte[] s_ImageBuffer = new byte[0];
    static int s_ImageBufferSize = 0;

    // Signature reconstructed from the parameters used below; the original post omits it.
    static void Sobel(byte[] outputImage, IntPtr inputImage, int width, int height)
    {
        var halfWidth = width / 2;

        // Adjust buffer size if necessary.
        int bufferSize = width * height;
        if (bufferSize != s_ImageBufferSize || s_ImageBuffer.Length == 0)
        {
            s_ImageBufferSize = bufferSize;
            s_ImageBuffer = new byte[bufferSize];
        }

        // Move raw data into managed buffer.
        System.Runtime.InteropServices.Marshal.Copy(inputImage, s_ImageBuffer, 0, bufferSize);

        // Detect edges. Instead of iterating over every pixel, we do every other one.
        for (int j = 1; j < height - 1; j += 2)
        {
            for (int i = 1; i < width - 1; i += 2)
            {
                // Offset of the pixel at [i, j] of the input image.
                int offset = (j * width) + i;
                byte pixel = s_ImageBuffer[offset];

                // A normal Sobel filter would offset by the full width here;
                // using halfWidth is part of how we get the offset effect.
                int a00 = s_ImageBuffer[offset - halfWidth - 1];
                int a01 = s_ImageBuffer[offset - halfWidth];
                int a02 = s_ImageBuffer[offset - halfWidth + 1];
                int a10 = s_ImageBuffer[offset - 1];
                int a12 = s_ImageBuffer[offset + 1];
                int a20 = s_ImageBuffer[offset + halfWidth - 1];
                int a21 = s_ImageBuffer[offset + halfWidth];
                int a22 = s_ImageBuffer[offset + halfWidth + 1];

                int xSum = -a00 - (2 * a10) - a20 + a02 + (2 * a12) + a22;
                int ySum = a00 + (2 * a01) + a02 - a20 - (2 * a21) - a22;

                // Instead of summing the X and Y gradients like a normal Sobel filter,
                // we consider them separately, which enables a tricolor look.
                if ((xSum * xSum) > 128)
                {
                    outputImage[offset] = 0x2F;
                }
                else if ((ySum * ySum) > 128)
                {
                    outputImage[offset] = 0xDF;
                }
                else
                {
                    // The noise is just for looks - it achieves a more unstable feel.
                    byte yPerlinByte = (byte)Mathf.PerlinNoise(j, 0f);
                    byte color = (byte)(pixel | yPerlinByte);
                    outputImage[offset] = color;
                }
            }
        }
    }

ARCore resources and how to share your ideas

With this kind of utility added to handheld cameras, AR will continue to move into the mainstream for consumers, simply because the camera is one of the most-used features on mobile devices. We’d love to learn how you would leverage the camera feed to create engaging AR experiences on Android!

Share your ideas with the community and use ARCore 1.1.0 for Unity to create high-quality AR apps for more than 100 million Android devices on Google Play! Here’s how:

  1. Set up ARCore 1.0 for Unity.
  2. Join the Unity Connect Handheld AR channel for an opportunity to meet, chat, and learn from other community creators working on AR apps.
  3. Share a short use-case video or a gif with a description on the channel.
  4. Unity will be actively engaging in the channel and watching for the most creative ideas!

Learn more about ARCore for Unity at Google I/O
