Loading an image with ALAssetsLibrary

... and what Apple doesn't tell you (at least directly).

Access control

Usually you want to load an image from the user's photo library and just use it. Apple implemented various ways for you to get your hands on the image data in iOS 3 and 4, but the problem with those was privacy concerns, not about the image data itself but about the metadata.

So in iOS 4 and 5, if you wanted to use ALAssetsLibrary, the system popped up a message informing the user that the app wanted to use location data. That confused many people, because most of them were not aware that the images themselves contain geo-tags that could be used to track the user. So many apps defaulted to using the photo picker supplied by Apple to avoid that message. It was a customer experience nightmare and prompted many bad reviews:

"The App wants location access but it just should apply a filter to an image! 1 Star" <small>(some AppStore user)</small>

But why is that? Could Apple not just filter out the geo-tags when the user declined? No, it could not, because the API looks a bit brain-dead at first glance: ALAssetRepresentation has a method getBytes:fromOffset:length:error: that gives the programmer direct access to the underlying file for the asset, with all embedded metadata included. Giving the programmer direct access to the data is not as dumb as it first looks, but we'll get to that later.

So in iOS 6 Apple introduced "Privacy settings" that include a switch for photo access, one for location, one for the microphone, and so on. So if you want to load an image through the assets library you'll have to ask for permission first. This is done implicitly (on first use), but you'll have to check for denial manually by calling [ALAssetsLibrary authorizationStatus] before presenting a UI to the user that may otherwise fail to display content:

switch ([ALAssetsLibrary authorizationStatus]) {
    case ALAuthorizationStatusAuthorized:
        // User allowed access
        // fall through
    case ALAuthorizationStatusNotDetermined:
        // Ok cool, go ahead, use the assets library,
        // user will be prompted to allow access on first use

        // show UI to use library
        break;
    case ALAuthorizationStatusRestricted:
        // Access is restricted by the administrator of the device and cannot be changed.
        // Do not display UI to browse or use the assets library.
        break;
    case ALAuthorizationStatusDenied:
        // User denied access, display UI explaining how to re-enable access
        // in the Settings app.
        break;
}
In iOS 6 users were still confused, because most app designers opted to ask the user on app startup for access to the assets library if the app used it (to avoid having to check for access all the time in code). Sometimes users did not immediately recognize why the app needed access and denied it, so the programmers had to include instructions on how to re-enable access. All in all, not the best UX one might wish for. In iOS 7 Apple added a key to the Info.plist file (NSPhotoLibraryUsageDescription) to modify the prompt shown to the user, allowing a little more explanation of what the app intends to do with the access rights granted by the user. That's much better, but still cumbersome.
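For illustration, such a usage description could look like this in the app's Info.plist (the wording here is just an example, not a required string):

```xml
<key>NSPhotoLibraryUsageDescription</key>
<string>We need access to your photo library so we can apply filters to your images.</string>
```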

Memory constraints

With the first iPad Apple introduced the camera connection kit (CCK), because the cloud solutions provided by Apple and others were not in the usable shape they are now. Apple wanted users to use the iPad for photo viewing, so why not allow direct import from digital cameras via USB or SD card?

Now consider this: the iPhone 3GS and iPhone 4 had relatively low resolution cameras at the time (3 and 5 megapixels), so decompressed image sizes were usually between 12 and 20 megabytes. But what happens if the user connects a 17 megapixel camera? Image sizes easily blow up to around 70 megabytes decompressed. Be aware that devices of that era only had about 256 megabytes of RAM available (of which about 100 MB were used by the system and about 64 MB went straight to the GPU), so we were out of luck opening such an image with the default iOS 3 image picker, which delivered the full size image decompressed as a UIImage instance. If you wanted to do anything other than just display an image on the screen, you had to be very careful not to catch the dreaded memory warning.

So Apple invented the ALAssetsLibrary API to help you out. With it you could enumerate saved assets, load default thumbnails, load aspect-ratio thumbnails, and fetch "full screen images". For most of the apps available, those low resolution variants were more than enough. The API had its fair share of bugs, but it usually worked.

But what if you wanted something bigger? Now we get to ALAssetRepresentation's getBytes:fromOffset:length:error: method. With it you can, if you know your way around Core Graphics, load images of any size you desire without wasting RAM. But that is really butt ugly; why not implement a loadImageOfSize: method on ALAssetRepresentation? Only Apple knows.

CGFloat maxSideLen = 1000.0;
ALAssetRepresentation *representation = [asset defaultRepresentation]; // asset is the ALAsset to load

CGDataProviderDirectCallbacks callbacks = {
    .version = 0,
    .getBytePointer = NULL,
    .releaseBytePointer = NULL,
    .getBytesAtPosition = getAssetBytesCallback,
    .releaseInfo = releaseAssetCallback,
};

CGDataProviderRef provider = CGDataProviderCreateDirect((void *)CFBridgingRetain(representation), [representation size], &callbacks);
CGImageSourceRef source = CGImageSourceCreateWithDataProvider(provider, NULL);

NSDictionary *options = @{(NSString *)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
                          (NSString *)kCGImageSourceThumbnailMaxPixelSize : @(maxSideLen)};
CGImageRef img = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);

CFRelease(source);
CGDataProviderRelease(provider);

// Do something with img, then release it with CGImageRelease(img)


To get that working we need the following C functions somewhere outside of the @implementation of the current class (in the same file):

static size_t getAssetBytesCallback(void *info, void *buffer, off_t position, size_t count) {
    ALAssetRepresentation *representation = (__bridge id)info;

    NSError *error = nil;
    size_t bytes = [representation getBytes:(uint8_t *)buffer fromOffset:position length:count error:&error];

    if (bytes == 0 && error) {
        NSLog(@"Error while reading asset: %@", error);
    }

    return bytes;
}

static void releaseAssetCallback(void *info) {
    // Balance the CFBridgingRetain from the provider creation
    CFRelease(info);
}

If you try the above code, or just use ALAssetRepresentation's fullSizeImage (warning: don't do that! You'll crash your app because of that 100 megapixel image that the user saved on his iPad and didn't tell you about. Your app will crash if you try to display a 400 MB image, trust me, I tried!), you'll see that the image is not necessarily rotated the way you might assume. This is because of EXIF rotation, a metadata tag appended to images that were captured in an orientation that is not native to the camera sensor. Modern digital cameras read their sensor one specific way; if the user rotates the camera, an acceleration sensor picks that up and the camera logic writes an orientation tag to the image. So the physical image on disk is saved in the sensor's native orientation, together with a hint that the image viewer has to rotate the image before display. You could fetch that orientation information from the ALAssetRepresentation and rotate the image yourself (or just initialize a UIImage with its imageWithCGImage:scale:orientation: initializer and tell it the orientation), or you could let the system handle that for you.
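If you choose the manual route, it helps that ALAssetOrientation and UIImageOrientation are declared with identical raw values, so a sketch (with img being the CGImageRef loaded above) could look like this:

```objc
// ALAssetOrientation and UIImageOrientation share the same raw values,
// so the EXIF orientation can be carried over with a plain cast.
ALAssetOrientation assetOrientation = [representation orientation];
UIImage *uiImage = [UIImage imageWithCGImage:img
                                       scale:[representation scale]
                                 orientation:(UIImageOrientation)assetOrientation];
// A UIImageView will now rotate the image correctly on display.
```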

If you use Core Graphics to load the image (as I am suggesting you do!), this is relatively easy; just add one entry to the options dictionary: (NSString *)kCGImageSourceCreateThumbnailWithTransform : @YES. But be aware that this effectively resets the orientation to ALAssetOrientationUp, so you have to ignore the orientation property of the ALAssetRepresentation from then on.
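With that entry added, the options dictionary from the loading code above becomes:

```objc
NSDictionary *options = @{(NSString *)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
                          (NSString *)kCGImageSourceThumbnailMaxPixelSize : @(maxSideLen),
                          (NSString *)kCGImageSourceCreateThumbnailWithTransform : @YES};
```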

iOS 6 Photos-App Effects and Cropping

In iOS 6 Apple allowed users to crop and rotate images in the Photos app, and in iOS 7 it introduced camera effects (like Instagram's). As you might imagine, this complicates image loading further.

If you look at Core Image closely you'll see the following method stand out: CIFilter's filterArrayFromSerializedXMP:inputImageExtent:error:. It returns an array of CIFilter instances, so you can apply all the filters the user applied to the image yourself. Why do we need that? Because if you fetch the fullSizeImage or get at the image data using getBytes:fromOffset:length:error:, you'll always get the original image data, unmodified by any filter.

Ok, you might think: "This is easy, I'll load the image from the assets library using Core Graphics to avoid being bombed out by a memory warning, then apply that filter chain to the image and be done."

But not so fast! Remember that I said the user is able to crop the image? Let's see how that cropping is implemented in the filter chain:

(lldb) po filterArray
(
    "<CIAffineTransform: inputImage=nil inputTransform=CGAffineTransform: {{1, 0, 0, 1}, {-108, -965}}>",
    "<CICrop: inputImage=nil inputRectangle=[0 0 1541 973]>"
)

What you can see here is bad, very bad: the filter chain contains an affine transform (which merely moves the image around) and a crop filter, both containing absolute pixel values! So if we load a smaller-sized image, we have to correct the filter chain for the new size. This is very ugly, but I did not find an easier method.

NSError *error = nil;
CGSize originalImageSize = CGSizeMake([representation.metadata[@"PixelWidth"] floatValue],
                                      [representation.metadata[@"PixelHeight"] floatValue]);
NSData *xmpData = [representation.metadata[@"AdjustmentXMP"] dataUsingEncoding:NSUTF8StringEncoding];

EAGLContext *myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
CIContext *context = [CIContext contextWithEAGLContext:myEAGLContext
                                               options:@{ kCIContextWorkingColorSpace : [NSNull null] }];

CIImage *image = [CIImage imageWithCGImage:img];
NSArray *filterArray = [CIFilter filterArrayFromSerializedXMP:xmpData
                                             inputImageExtent:image.extent
                                                        error:&error];

if ((originalImageSize.width != CGImageGetWidth(img))
    || (originalImageSize.height != CGImageGetHeight(img))) {
    CGFloat zoom = MIN(originalImageSize.width / CGImageGetWidth(img),
                       originalImageSize.height / CGImageGetHeight(img));
    BOOL translationFound = NO, cropFound = NO;
    for (CIFilter *filter in filterArray) {
        if ([filter.name isEqualToString:@"CIAffineTransform"] && !translationFound) {
            translationFound = YES;
            CGAffineTransform t = [[filter valueForKey:@"inputTransform"] CGAffineTransformValue];
            t.tx /= zoom;
            t.ty /= zoom;
            [filter setValue:[NSValue valueWithCGAffineTransform:t] forKey:@"inputTransform"];
        }
        if ([filter.name isEqualToString:@"CICrop"] && !cropFound) {
            cropFound = YES;
            // CICrop's inputRectangle is a CIVector
            CGRect r = [[filter valueForKey:@"inputRectangle"] CGRectValue];
            r.origin.x /= zoom;
            r.origin.y /= zoom;
            r.size.width /= zoom;
            r.size.height /= zoom;
            [filter setValue:[CIVector vectorWithCGRect:r] forKey:@"inputRectangle"];
        }
    }
}

// apply the filter chain
for (CIFilter *filter in filterArray) {
    [filter setValue:image forKey:kCIInputImageKey];
    image = [filter outputImage];
}

// render
CGImageRef editedImage = [context createCGImage:image fromRect:image.extent];

// do something with editedImage, then CGImageRelease(editedImage)


Shared Photo-Streams, unavailable images and change notifications

Not everything the assets library has a thumbnail for is immediately available on the device (likely the case for shared photo streams that are currently syncing to the device). Assets may also be added while your app is running (the app is in the background, or a new image taken with another device that shares the same iCloud account appears in the photo stream).

For this Apple posts an NSNotificationCenter notification to which your app has to react properly:

id observer = [[NSNotificationCenter defaultCenter] addObserverForName:ALAssetsLibraryChangedNotification
                                                                object:nil
                                                                 queue:[NSOperationQueue mainQueue]
                                                            usingBlock:^(NSNotification *note) {
                                                                // refresh GUI
                                                            }];

Don't forget to deregister the returned observer if the GUI is not visible anymore:

    [[NSNotificationCenter defaultCenter] removeObserver:observer];

If the user is a heavy user of shared photo streams you might get a lot of these notifications, so you can disable photo stream support (and the notifications that come with it) by calling [ALAssetsLibrary disableSharedPhotoStreamsSupport].

Be aware that you may run into assets that have only partly synced (the thumbnails are there, but no default representation). If ALAsset's defaultRepresentation returns nil, this is the case, and you'll have to wait for an ALAssetsLibraryChangedNotification to fire and try again.
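A minimal check for such a partly synced asset could look like this (assuming asset is an ALAsset obtained from an enumeration):

```objc
ALAssetRepresentation *rep = [asset defaultRepresentation];
if (rep == nil) {
    // Asset has not fully synced yet: show only the thumbnail and
    // retry after the next ALAssetsLibraryChangedNotification.
} else {
    // The full image data is available and safe to load.
}
```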

Wrap up

To make life a little easier for you (and for myself) I wrote some categories on ALAssetRepresentation and ALAssetsLibrary that provide all the mentioned improvements via an easy-to-use API and published them on GitHub. I BSD-licensed them to make them available for all people to use wherever they see fit. Have fun!