Capture camera frames while scanning

The Scandit Barcode Scanner gives you access to the camera frames after they have been processed. This is useful if you want to further process a frame after a barcode was, or was not, recognized.

Receiving the frames

To receive the camera frames, you first have to set the SBSBarcodePicker::processFrameDelegate:

self.scanditBarcodePicker.processFrameDelegate = self;

Then implement the SBSProcessFrameDelegate method that gives you access to the frame. The frame you receive is the unchanged buffer reference that the scanner itself receives from the iOS camera API, in the YCbCrBiPlanar image format. Be aware that the frame does not rotate with the phone; it is always in landscape-right orientation, just as it was originally captured by the camera.

@interface YourViewController () <SBSProcessFrameDelegate>
...
- (void)barcodePicker:(SBSBarcodePicker *)barcodePicker
      didProcessFrame:(CMSampleBufferRef)frame
              session:(SBSScanSession *)session {
    // Process the frame yourself.
}

Careful: The SBSProcessFrameDelegate method is invoked on an internal picker queue. To perform any UI work, you must dispatch back to the main thread, as sketched below.
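
A minimal sketch of this pattern (statusLabel is a hypothetical UILabel property, used here only for illustration):

- (void)barcodePicker:(SBSBarcodePicker *)barcodePicker
      didProcessFrame:(CMSampleBufferRef)frame
              session:(SBSScanSession *)session {
    // Read what you need from the frame on the picker's queue...
    size_t width = CVPixelBufferGetWidth(CMSampleBufferGetImageBuffer(frame));
    // ...then hop to the main queue for any UI updates.
    dispatch_async(dispatch_get_main_queue(), ^{
        self.statusLabel.text = [NSString stringWithFormat:@"Frame width: %zu", width];
    });
}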


Reading the YCbCrBiPlanar buffer information

If you want to read from the buffer, you will need further information about it, such as the dimensions of the image and the offsets and bytes per row of the Y and CbCr components. The iOS SDK provides all the functions necessary to retrieve them:

CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(frame);
// Lock the base address of the pixel buffer.
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the pixel buffer width and height.
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Get the buffer info for the YCbCrBiPlanar format.
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
CVPlanarPixelBufferInfo_YCbCrBiPlanar *bufferInfo = (CVPlanarPixelBufferInfo_YCbCrBiPlanar *)baseAddress;
// Get the offsets and bytes per row.
int yOffset = CFSwapInt32BigToHost(bufferInfo->componentInfoY.offset);
int yRowBytes = CFSwapInt32BigToHost(bufferInfo->componentInfoY.rowBytes);
int cbCrOffset = CFSwapInt32BigToHost(bufferInfo->componentInfoCbCr.offset);
int cbCrRowBytes = CFSwapInt32BigToHost(bufferInfo->componentInfoCbCr.rowBytes);
// Cast the base address pointer to unsigned char and read from it using the offsets and bytes per row.
unsigned char *dataPtr = (unsigned char*)baseAddress;
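
For example, here is a minimal sketch that uses this information to compute the average luminance of the frame from the Y plane:

// Average the Y (luma) component over the whole frame.
unsigned long ySum = 0;
for (size_t row = 0; row < height; row++) {
    unsigned char *yLine = dataPtr + yOffset + row * yRowBytes;
    for (size_t col = 0; col < width; col++) {
        ySum += yLine[col];
    }
}
double averageLuminance = (double)ySum / (double)(width * height);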

You can read more about the image format on Wikipedia.

Important: Make sure to unlock the buffer once you no longer need it.

CVPixelBufferUnlockBaseAddress(imageBuffer, 0);


Converting the frames to RGB, UIImage and NSData

If you need the RGB values of the frame, you can convert the YCbCr image to RGB. Start by reading the buffer information as discussed in the previous section; then, before unlocking the imageBuffer, add the following code to convert to RGB:

// Allocate a byte array for the RGBA values.
unsigned char *rgbaImage = (unsigned char *)malloc(4 * width * height);
// Loop over the width and height of the frame and convert the Y, Cb and Cr components to RGB.
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        int ypIndex = yOffset + (x + y * yRowBytes);
        int yp = (int)dataPtr[ypIndex];
        // Cb and Cr are subsampled: one interleaved Cb/Cr pair per 2x2 block of pixels.
        unsigned char *cbCrPtr = dataPtr + cbCrOffset;
        unsigned char *cbCrLinePtr = cbCrPtr + cbCrRowBytes * (y >> 1);
        unsigned char cb = cbCrLinePtr[x & ~1];
        unsigned char cr = cbCrLinePtr[x | 1];
        // Conversion to RGB.
        int r = yp + 1.402 * (cr - 128);
        int g = yp - 0.34414 * (cb - 128) - 0.71414 * (cr - 128);
        int b = yp + 1.772 * (cb - 128);
        r = MIN(MAX(r, 0), 255);
        g = MIN(MAX(g, 0), 255);
        b = MIN(MAX(b, 0), 255);
        // Store the pixel in BGRA memory order (see the bitmap flags used below).
        rgbaImage[(x + y * width) * 4] = (unsigned char)b;
        rgbaImage[(x + y * width) * 4 + 1] = (unsigned char)g;
        rgbaImage[(x + y * width) * 4 + 2] = (unsigned char)r;
        rgbaImage[(x + y * width) * 4 + 3] = (unsigned char)255;
    }
}

This results in an RGBA byte array of the frame (stored BGRA in memory order, matching the little-endian bitmap flags used below) which you can then process further. You can also create a UIImage from this buffer:

// Create a device-dependent RGB color space.
static CGColorSpaceRef colorSpace = NULL;
if (colorSpace == NULL) {
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL) {
        // Handle the error appropriately; clean up before bailing out.
        free(rgbaImage);
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        return;
    }
}
// Create a Quartz direct-access data provider that uses the data we supply.
// Note: the provider does not copy the buffer, so rgbaImage must stay valid
// for as long as the image is in use.
CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, rgbaImage,
                                                              4 * width * height, NULL);
// Create a bitmap image from data supplied by the data provider.
CGImageRef cgImage = CGImageCreate(width, height, 8, 32, width * 4,
                                   colorSpace,
                                   kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                                   dataProvider, NULL, true, kCGRenderingIntentDefault);
CGDataProviderRelease(dataProvider);
// Create an image object to represent the Quartz image.
UIImage *image = [UIImage imageWithCGImage:cgImage];
// Release the CGImage and unlock the base address.
CGImageRelease(cgImage);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
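
Recall that the frame buffer is always in landscape-right orientation. If your UI runs in portrait and you want the image upright, one option is to tag the image with an orientation instead of using the plain imageWithCGImage: call above (a sketch; the right orientation value depends on your interface orientation):

UIImage *image = [UIImage imageWithCGImage:cgImage
                                     scale:1.0
                               orientation:UIImageOrientationRight];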

Of course, you can also get the NSData for the PNG or JPEG representation of the frame:

NSData *data = UIImagePNGRepresentation(image);
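
Similarly, UIImageJPEGRepresentation produces JPEG data; its second parameter is the compression quality between 0.0 and 1.0:

NSData *jpegData = UIImageJPEGRepresentation(image, 0.9);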

Important: Make sure you always free the RGBA byte array at the end to avoid a memory leak. Since the data provider above does not copy the buffer, only free it once you are done using the image.

free(rgbaImage);