
Loading an image with ALAssetsLibrary

... and what Apple doesn't tell you (at least directly).

Access control

Usually you want to load an image from the user's photo library and just use it. Apple implemented various ways for you to get your hands on the image data in iOS 3 and 4, but the problem with those was privacy concerns: not about the image data itself, but about the metadata.

So in iOS 4 and 5, if you wanted to use ALAssetsLibrary, the system popped up a message informing the user that the app wants to use location data. That confused many people because most of them were not aware that the images themselves contain geo-tags that allow tracking the user. So many apps defaulted to the photo picker supplied by Apple just to avoid that message. It was a customer experience nightmare and prompted many bad reviews:

"The App wants location access but it just should apply a filter to an image! 1 Star"
(some AppStore user)

But why is that? Could Apple not just filter out the geo-tags when the user declined? No, it could not, because the API looks a bit brain-dead at first glance: ALAssetRepresentation has a method getBytes:fromOffset:length:error: that gives the programmer of an app direct access to the underlying file of the asset, with all embedded metadata included. Giving the programmer direct access to the data is not as dumb as it looks at first, but we'll get to that later.

So in iOS 6 Apple introduced "Privacy settings" that include a switch for photo access, one for location, one for the microphone and so on. So if you want to load an image through the assets library you'll have to ask for permission first. This is done implicitly (on first use), but you'll have to check for denial manually by calling [ALAssetsLibrary authorizationStatus] before presenting a UI to the user that may otherwise fail to display content.

switch ([ALAssetsLibrary authorizationStatus]) {
    case ALAuthorizationStatusAuthorized:
        // User allowed access
        // fallthrough
    case ALAuthorizationStatusNotDetermined:
        // Ok cool, go ahead, use the assets library,
        // user will be prompted to allow access on first use

        // show UI to use library
        break;
    case ALAuthorizationStatusRestricted:
        // Access is restricted by an administrator of the device and cannot be changed
        // Do not display UI to browse or use the assets library
        break;
    case ALAuthorizationStatusDenied:
        // User denied access, display UI to explain to the user how to re-enable access
        break;
}

In iOS 6 users were still confused because most app designers opted to ask the user on app startup for access to the assets library if the app used it (to avoid having to check for access all the time in code). Sometimes users did not immediately recognize why the app needed access and denied it, so programmers had to include instructions on how to re-enable access. All in all not the best UX one may wish for. In iOS 7 Apple included a key for the Info.plist file (NSPhotoLibraryUsageDescription) to modify the prompt that is shown to the user, allowing for a little more explanation of what the app intends to do with the access rights. That's much better but still cumbersome.

Memory constraints

With the first iPad Apple introduced the Camera Connection Kit (CCK), because the cloud solutions provided by Apple and others were not in the usable shape they are in now. Apple wanted users to use the iPad for photo viewing, so why not allow direct import from digital cameras via USB or SD card?

Now consider this: the iPhone 3GS and iPhone 4 had relatively low resolution cameras at the time (3 and 5 megapixels), so decompressed image sizes were usually between 12 and 20 megabytes. But do you know what happens when the user connects a 17 megapixel camera? Image sizes easily blow up to around 70 megabytes decompressed. Be aware that devices of that era had only about 256 megabytes of RAM available (of which about 100 MB were used by the system and about 64 MB went straight to the GPU), so we were out of luck opening such an image with the default iOS 3 image picker, which delivered the full size image decompressed as a UIImage instance. If you wanted to do anything other than just displaying an image on the screen you had to be very careful not to catch the dreaded memory warning.

So Apple invented the ALAssetsLibrary API to help you out. With it you could enumerate saved assets, load default thumbnails, load aspect ratio thumbnails and fetch "full screen images"; for most of the apps available, those low resolution variants were more than enough. The API had its fair share of bugs but it usually worked.

But what if you want something bigger? Now we get to ALAssetRepresentation's getBytes:fromOffset:length:error: method. With it you can, if you know your way around Core Graphics, load images of any size you desire without wasting RAM. But that is really butt ugly; why not implement a loadImageOfSize: method on ALAssetRepresentation? Only Apple knows.

CGFloat maxSideLen = 1000.0;
ALAssetRepresentation *representation; // e.g. the asset's defaultRepresentation

CGDataProviderDirectCallbacks callbacks = {
    .version = 0,
    .getBytePointer = NULL,
    .releaseBytePointer = NULL,
    .getBytesAtPosition = getAssetBytesCallback,
    .releaseInfo = releaseAssetCallback,
};

CGDataProviderRef provider = CGDataProviderCreateDirect((void *)CFBridgingRetain(representation), [representation size], &callbacks);
CGImageSourceRef source = CGImageSourceCreateWithDataProvider(provider, NULL);

NSDictionary *options = @{(NSString *)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
                          (NSString *)kCGImageSourceThumbnailMaxPixelSize : @(maxSideLen)};
CGImageRef img = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);

// Do something with img

CGImageRelease(img);
CFRelease(source);
CGDataProviderRelease(provider);


To get this working we need the following C functions somewhere outside of the implementation of the current class (but in the same file):

static size_t getAssetBytesCallback(void *info, void *buffer, off_t position, size_t count) {
    ALAssetRepresentation *representation = (__bridge id)info;

    NSError *error = nil;
    size_t bytes = [representation getBytes:(uint8_t *)buffer fromOffset:position length:count error:&error];

    if (bytes == 0 && error) {
        NSLog(@"Error while reading asset: %@", error);
    }

    return bytes;
}

static void releaseAssetCallback(void *info) {
    // balance the CFBridgingRetain done when creating the data provider
    CFRelease(info);
}


If you try the above code, or just use ALAssetRepresentation's fullSizeImage (warning: don't do that! You'll crash your app because of that 100 megapixel image the user saved on his iPad and didn't tell you about. Your app will crash if you try to display a 400 MB image, trust me, I tried!), you'll see that the image is not necessarily rotated the way you might assume. This is because of EXIF rotation: a metadata tag appended to images that were captured in an orientation that is not native to the camera sensor. Modern digital cameras read their sensor one specific way; if the user rotates the camera, an acceleration sensor picks that up and the logic writes an orientation tag to the image. So the physical image on disk is saved in the orientation native to the camera sensor, plus a hint that the image viewer has to rotate the image before display. You could fetch that orientation information from the ALAssetRepresentation and rotate the image yourself (or just create a UIImage with imageWithCGImage:scale:orientation: and tell it the orientation), or you could let the system handle that for you.

If you use Core Graphics to load the image (as I am suggesting you do!) this is relatively easy, just add one entry to the options dictionary: (NSString *)kCGImageSourceCreateThumbnailWithTransform : @YES
But be aware that this effectively resets the orientation to ALAssetOrientationUp, so you'll have to ignore the orientation property of the ALAssetRepresentation.

iOS 6 Photos-App Effects and Cropping

In iOS 7 Apple introduced camera effects (like Instagram) and in iOS 6 it allowed the users to crop and rotate images in the Photos app. As you might imagine this complicates image loading further.

If you look closely at Core Image you'll see the following function pop out: CIFilter's filterArrayFromSerializedXMP:inputImageExtent:error:
It returns an array of CIFilter instances, so you can apply all filters the user applied to the image yourself. Why do we need that? Because if you fetch the fullSizeImage or get at the image data using getBytes:fromOffset:length:error:, you'll always get the original image data, unmodified by any filter.

Ok, you might think: "This is easy, I'll load the image from the assets library using Core Graphics to avoid being bombed out by a memory warning, then apply that filter chain to the image and be done."

But not so fast! Remember that I said the user is able to crop the image? Let's see how that cropping is implemented in the filter chain:

(lldb) po filterArray
(
    "<CIAffineTransform: inputImage=nil inputTransform=CGAffineTransform: {{1, 0, 0, 1}, {-108, -965}}>",
    "<CICrop: inputImage=nil inputRectangle=[0 0 1541 973]>"
)

What you can see here is bad, very bad: the filter chain contains an affine transform (that merely moves the image around) and a crop filter, and both contain absolute pixel values!
So if we load a smaller sized image we have to correct the filter chain for the new size. This is very ugly, but I did not find an easier method.

NSError *error = nil;
CGSize originalImageSize = CGSizeMake([representation.metadata[@"PixelWidth"] floatValue],
                                      [representation.metadata[@"PixelHeight"] floatValue]);
NSData *xmpData = [representation.metadata[@"AdjustmentXMP"] dataUsingEncoding:NSUTF8StringEncoding];

EAGLContext *myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
CIContext *context = [CIContext contextWithEAGLContext:myEAGLContext
                                               options:@{ kCIContextWorkingColorSpace : [NSNull null] }];

CIImage *image = [CIImage imageWithCGImage:img];
NSArray *filterArray = [CIFilter filterArrayFromSerializedXMP:xmpData
                                             inputImageExtent:image.extent
                                                        error:&error];

// if we loaded a smaller image, rescale the pixel values in the filter chain
if ((originalImageSize.width != CGImageGetWidth(img))
    || (originalImageSize.height != CGImageGetHeight(img))) {
    CGFloat zoom = MIN(originalImageSize.width / CGImageGetWidth(img),
                       originalImageSize.height / CGImageGetHeight(img));
    BOOL translationFound = NO, cropFound = NO;
    for (CIFilter *filter in filterArray) {
        if ([filter.name isEqualToString:@"CIAffineTransform"] && !translationFound) {
            translationFound = YES;
            CGAffineTransform t = [[filter valueForKey:@"inputTransform"] CGAffineTransformValue];
            t.tx /= zoom;
            t.ty /= zoom;
            [filter setValue:[NSValue valueWithCGAffineTransform:t] forKey:@"inputTransform"];
        }
        if ([filter.name isEqualToString:@"CICrop"] && !cropFound) {
            cropFound = YES;
            CGRect r = [[filter valueForKey:@"inputRectangle"] CGRectValue];
            r.origin.x /= zoom;
            r.origin.y /= zoom;
            r.size.width /= zoom;
            r.size.height /= zoom;
            [filter setValue:[NSValue valueWithCGRect:r] forKey:@"inputRectangle"];
        }
    }
}

// apply the filter chain
for (CIFilter *filter in filterArray) {
    [filter setValue:image forKey:kCIInputImageKey];
    image = [filter outputImage];
}

// render
CGImageRef editedImage = [context createCGImage:image fromRect:image.extent];

// do something with editedImage


Shared Photo-Streams, unavailable images and change notifications

Not everything the assets library has a thumbnail for is immediately available on the device; this is likely the case for shared photo streams that are currently syncing to the device. Assets may also be added while your app is running (the app is in the background, or a new image is taken on the photo stream with another device that shares the same iCloud account).

For this Apple posts an NSNotificationCenter notification to which your app has to react properly.

[[NSNotificationCenter defaultCenter] addObserverForName:ALAssetsLibraryChangedNotification
                                                  object:nil
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *note) {
                                                  // refresh GUI
                                              }];

Don't forget to deregister if the GUI is not visible anymore:

    [[NSNotificationCenter defaultCenter] removeObserver:self
                                                    name:ALAssetsLibraryChangedNotification
                                                  object:nil];

If the user is a heavy user of shared photo streams you might get a lot of these notifications, so you can disable them for photo streams by calling [ALAssetsLibrary disableSharedPhotoStreamsSupport].

Be aware that you may run into assets that have only partly synced (thumbnails are there but no default representation). If ALAsset's defaultRepresentation returns nil this is the case, and you'll have to wait for an ALAssetsLibraryChangedNotification to fire and try again.

Wrap up

To make life a little easier for you (and for myself) I wrote some categories on ALAssetRepresentation and ALAssetsLibrary that provide all the mentioned improvements via an easy to use API and published them on GitHub. I BSD-licensed them to make them available for all people to use wherever they see fit. Have fun!

C compiler macros and attributes to help you

"So what?", I hear you asking, but this will not be just another list of useful preprocessor macros to use (or not use) in your programs or libraries. I want to show you what you can do to make your code easier to understand, both for people and machines.

Some things mentioned here will be common knowledge for the seasoned C/C++/Objective-C programmer, but some things I found and use today were buried deep in obscure system header files or compiler documentation. I want to spare you my journey of finding them one by one and then refactoring your code to make use of them one thing at a time (you do refactor your old code, don't you?).

Some things mentioned here will be compiler or vendor specific, but I will mention where each works and whether it is a good idea to use it. Some will be useful only if you run a static analyzer on your code, as they are merely hints to clarify ambiguities in the code. Others give the compiler hints about optimization possibilities, and yet others make your API more readable to the humans that use it, and even warn them if they are using it wrong.

Platform detection

#if defined(__linux__)
    /* Linux */
#elif !defined(_WIN32) && (defined(__unix__) || defined(__unix) || (defined(__APPLE__) && defined(__MACH__)))
    #include <sys/param.h>
    #ifdef __APPLE__
        #include "TargetConditionals.h"
        #if TARGET_IPHONE_SIMULATOR
            /* iOS Simulator */
        #elif TARGET_OS_IPHONE
            /* iOS Device */
        #else
            /* OSX */
        #endif
    #elif defined(BSD)
        /* BSD */
    #endif
#elif defined(_WIN64)
    /* Windows x64 */
#elif defined(_WIN32)
    /* Windows 32 Bit */
#elif __posix
    /* At least POSIX compatible */
#elif __unix
    /* some other UNIX not caught above */
#endif


Things that work on more than one compiler and platform. Much of this is preprocessor magic or written down in the standards, but each comes with its own caveats.

  • inline / noinline
    explicitly define whether a function may be inlined. Inlining a function means
    inserting the function body verbatim at the calling position instead of creating
    a branch and actually calling the function. This may speed up your code if used
    correctly but can increase the generated machine code size. Only use it for short
    "hot" functions and keep an eye on the performance and size of your binary.
    Do not assume inlining speeds up your code: the increased code size may
    end up ruining CPU cache locality. See also: the WebKit team's article
    on reducing code size to speed up WebKit
  • __GNUC__
    this preprocessor macro is defined for all compilers that support the GNU C
    compiler extensions (at least GCC and clang do)


Things that primarily work on Apple platforms (official clang/LLVM compiler is assumed).


NS_DESIGNATED_INITIALIZER

Apply this attribute to the init function of your class that should be called by all other init functions (and thus by subclasses calling up into super) instead of [super init]. You'll get a warning if you forget to call the designated initializer, so you can be sure all initialization goes through the same path and all internal state is really set at the end of a custom init function.

In use it looks like this:

#if __has_attribute(objc_designated_initializer)
    #define NS_DESIGNATED_INITIALIZER __attribute__((objc_designated_initializer))
#else
    #define NS_DESIGNATED_INITIALIZER
#endif

- (instancetype)init NS_DESIGNATED_INITIALIZER;
- (instancetype)initWithURL:(NSURL *)URL;

So if we forget to call init from initWithURL we get the following warnings:

Warning: Secondary initializer should not invoke an initializer on 'super'
Warning: Secondary initializer missing a 'self' call to another initializer


instancetype

Use instancetype to return instances of an initialized class instead of id, to aid the compiler's type checking. For init functions it is assumed that the function returns an instance of the class on which init was called, but that is not the case for class constructor methods (often provided for convenience to make the code more readable), because the compiler does not know the intent of those functions.


[[[NSArray alloc] init] fooBar];

leads to:

No visible @interface for `NSArray` declares the selector `fooBar`

But the following will not lead to a warning, because array is declared to return id instead of instancetype:

[[NSArray array] fooBar];

If we declare the convenience constructor as follows, we get that first warning again:

+ (instancetype)array;


NS_ENUM

This one is again used for convenience: it allows the compiler and Xcode to aid you with warnings and autocompletion when using typedef enum declarations.

Usually you'll declare a set of mutually exclusive flags or states like this:

typedef enum {
    ExampleStateOne,
    ExampleStateTwo
} ExampleEnum;

ExampleEnum fooBar;

If you do that, the compiler decides for you what datatype is used (usually int) and you'll have no control over that. To avoid the uncertainty a lot of people used the following construct to mitigate that:

enum {
    ExampleStateOne,
    ExampleStateTwo
};

typedef NSInteger ExampleEnum;
ExampleEnum fooBar;

But that leads to a disconnect of the type name and the possible values, so autocompletion and switch ... case warnings won't work. So Apple invented NS_ENUM that ties those things together again:

typedef NS_ENUM(NSInteger, ExampleEnum) {
    ExampleStateOne,
    ExampleStateTwo
};

ExampleEnum fooBar;


NS_OPTIONS

If you want to use an enum as some kind of flag storage where flags are not mutually exclusive, there is NS_OPTIONS. It works the same way NS_ENUM does but allows variables of the defined type to contain multiple flags combined with a bitwise OR.

So if you have something like this:

typedef enum {
    FirstFlag  = 0,
    SecondFlag = (1 << 0),
    ThirdItem  = (1 << 1),
    FourthItem = (1 << 2)
} ExampleFlags;

ExampleFlags fooBar;

instead use this:

typedef NS_OPTIONS(NSInteger, ExampleFlags) {
    FirstFlag  = 0,
    SecondFlag = (1 << 0),
    ThirdItem  = (1 << 1),
    FourthItem = (1 << 2)
};
ExampleFlags fooBar;


NS_REQUIRES_SUPER

Mark a function in any of your classes as needing a call to super to work correctly when subclassed and overridden. This just adds a warning if you forget to call super in the subclass. It can be a lifesaver when you have to debug strange behaviour or ship libraries, and it tells the user of the function to make sure they know what they're doing when overriding it.

Just append to your function declaration:

- (NSString *)debugMe NS_REQUIRES_SUPER;


NS_FORMAT_FUNCTION

Add format string type checking, as known from NSLog and printf, to your own functions. The macro takes 2 arguments: the first is the position of the format string argument, the second is the position of the ... element containing the values to include in the string.

Use like this:

void MyLog(NSString *fmt, ...) NS_FORMAT_FUNCTION(1,2);

With this the compiler checks whether the types of the arguments after the format string match the types defined in the format string. Additionally it checks that the format string is a constant string and issues a warning if it is not.