The Scandit Barcode Scanner gives you access to the camera frames after each frame has been processed. This is useful if you want to run further processing on a frame, whether or not a barcode was recognized in it.
Receiving the frames
To receive the camera frames, you first have to set the SBSBarcodePicker::processFrameDelegate property:
self.scanditBarcodePicker.processFrameDelegate = self;
Then implement the SBSProcessFrameDelegate method, which gives you access to the frame. The frame you receive is the unchanged buffer reference that the scanner itself gets from the iOS camera API, in the biplanar YCbCr (YCbCrBiPlanar) image format. Be aware that the frame does not rotate with the phone: it is always in landscape-right orientation, just as it is originally captured by the camera.
...
- (void)barcodePicker:(SBSBarcodePicker *)barcodePicker
      didProcessFrame:(CMSampleBufferRef)frame
              session:(SBSScanSession *)session {
    // Process the frame here.
}
Careful: The SBSProcessFrameDelegate method is invoked on a picker-internal queue. Any UI work must be dispatched to the main queue, for example with dispatch_async(dispatch_get_main_queue(), ...).
Reading the YCbCrBiPlanar buffer information
If you want to read from the buffer, you will need further information about it, such as the image dimensions and the offset and bytes-per-row values of the Y and CbCr planes. The iOS SDK provides all the necessary functions to retrieve them:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(frame);
// Lock the buffer before touching its memory; unlock it when done.
CVPixelBufferLockBaseAddress(imageBuffer, 0);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// For planar pixel buffers, the base address points to a
// CVPlanarPixelBufferInfo_YCbCrBiPlanar struct describing the planes.
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
CVPlanarPixelBufferInfo_YCbCrBiPlanar *bufferInfo = (CVPlanarPixelBufferInfo_YCbCrBiPlanar *)baseAddress;
// The struct fields are big-endian, so swap them to host byte order.
int yOffset = CFSwapInt32BigToHost(bufferInfo->componentInfoY.offset);
int yRowBytes = CFSwapInt32BigToHost(bufferInfo->componentInfoY.rowBytes);
int cbCrOffset = CFSwapInt32BigToHost(bufferInfo->componentInfoCbCr.offset);
int cbCrRowBytes = CFSwapInt32BigToHost(bufferInfo->componentInfoCbCr.rowBytes);
unsigned char *dataPtr = (unsigned char *)baseAddress;
You can read more about the image format on Wikipedia.
Important: Make sure to unlock the buffer once you no longer need it:
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
Converting the frames to RGB, UIImage and NSData
If you need the RGB values of the frame, you can convert the YCbCr image to RGB. Start by reading the buffer information as discussed in the previous section; then, before unlocking the imageBuffer, add the following code to perform the conversion:
// Allocate a 4-bytes-per-pixel output buffer (stored in BGRA order, see below).
unsigned char *rgbaImage = (unsigned char *)malloc(4 * width * height);
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        // Luma sample for this pixel.
        int ypIndex = yOffset + (x + y * yRowBytes);
        int yp = (int)dataPtr[ypIndex];
        // Chroma is subsampled: each interleaved Cb/Cr pair is shared
        // by a 2x2 block of pixels.
        unsigned char *cbCrPtr = dataPtr + cbCrOffset;
        unsigned char *cbCrLinePtr = cbCrPtr + cbCrRowBytes * (y >> 1);
        unsigned char cb = cbCrLinePtr[x & ~1];
        unsigned char cr = cbCrLinePtr[x | 1];
        // BT.601 full-range YCbCr -> RGB conversion.
        int r = yp + 1.402 * (cr - 128);
        int g = yp - 0.34414 * (cb - 128) - 0.71414 * (cr - 128);
        int b = yp + 1.772 * (cb - 128);
        r = MIN(MAX(r, 0), 255);
        g = MIN(MAX(g, 0), 255);
        b = MIN(MAX(b, 0), 255);
        // Store as BGRA to match the little-endian bitmap info used below.
        rgbaImage[(x + y * width) * 4] = (unsigned char)b;
        rgbaImage[(x + y * width) * 4 + 1] = (unsigned char)g;
        rgbaImage[(x + y * width) * 4 + 2] = (unsigned char)r;
        rgbaImage[(x + y * width) * 4 + 3] = (unsigned char)255;
    }
}
This produces a byte array with one 4-byte pixel per frame pixel, stored in BGRA order (blue first, matching the little-endian bitmap layout used below), which you can process further. You can also create a UIImage from this buffer:
// Cache the color space; it can be reused across frames.
static CGColorSpaceRef colorSpace = NULL;
if (colorSpace == NULL) {
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL) {
        free(rgbaImage);
        return nil;
    }
}
// Wrap the pixel data; note that the buffer is not copied.
CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, rgbaImage, 4 * width * height, NULL);
CGImageRef cgImage = CGImageCreate(width, height, 8, 32, width * 4,
                                   colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                                   dataProvider, NULL, true, kCGRenderingIntentDefault);
CGDataProviderRelease(dataProvider);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
Of course, you can also obtain an NSData object with the PNG or JPEG representation of the frame:
NSData *data = UIImagePNGRepresentation(image);
Important: Make sure you always free the RGBA byte array at the end to avoid a memory leak. Because CGDataProviderCreateWithData was given no release callback, the buffer is neither copied nor managed by the image, so only free it once the UIImage is no longer in use (or pass a release callback as the last argument so the buffer is freed automatically):
free(rgbaImage);