iOS & OSX offline maps

New MapKit features

At WWDC 2013 Apple introduced many new MapKit features for iOS 7 and brought the framework to OSX 10.9 (Mavericks). One of the major changes, which in my opinion didn't get enough attention in the developer community, is the introduction of some base classes that allow full map customization and support for offline maps. In this article I'm going to describe the new MKTileOverlay class and present an example, for both iOS and OSX, that demonstrates the new capabilities.

Since the earliest iPhone OS versions there have been many apps in the App Store that supported maps different from the ones provided by the operating system: consider for example navigation apps that required support for offline navigation, that is the possibility to see the map even without an internet connection. Another requirement came from special kinds of applications that needed to show proprietary information (such as "Yellow Pages" apps) or technical information (e.g. contour lines for mountains or sea-depth data).

There were several issues with this situation: first of all, the overall mapping experience differed completely from one solution to another, and in most cases it was subpar compared with the performance of the OS maps (either with Google or Apple data). Besides, from the point of view of the developer there was the problem of providing the right mapping code to support the map provider data: there was no unique solution, but many. Some were commercial and expensive, others were open source but poorly supported, and finally there were a lot of web-browser based solutions whose performance was far from that of the native maps, other than being difficult to integrate with Objective-C.

What we're going to show in this article is how drastically these things have changed and how easy it is to integrate your own map content with the common MapKit framework.

Map overlays

At the base of our discussion is the concept of "map overlay". This is not new in MapKit, but with iOS 7 things changed. Overlays are essentially parts of a map that can be overlaid on the base map, that is the part of the map representing the ground, the borders, the roads, and so on. Typically overlays are used to emphasize regions of the map that share a common property: e.g. to highlight a specific country, to represent the different intensities of an earthquake that occurred in a certain area, or to highlight a road path in a navigation app.

From the point of view of the developer, an overlay is any object that conforms to the MKOverlay protocol. This protocol defines the minimum properties and methods required to define an overlay: these are the approximate center coordinate and the bounding box that fully encloses the overlay (boundingMapRect). These two properties allow MapKit to determine whether a specific overlay is currently visible in the map, so that the framework can take the actions needed to display it. When an overlay object is added to the map using one of the MKMapView addOverlay: methods, control passes to the framework which, when it determines that a specific overlay needs to be displayed, calls the map view delegate asking it to provide the graphical representation of the overlay. Before iOS 7 Apple provided a set of concrete MKOverlay-conforming classes, each associated with a corresponding MKOverlayView. E.g. to represent a circular overlay we could use the built-in MKCircle class and then provide, for rendering, the associated MKCircleView class, without the need to define our own objects.

With iOS 7 things changed: now MKOverlayView has been replaced by MKOverlayRenderer. Even if this change doesn't require difficult refactoring to move code from pre-iOS 7 to iOS 7, thanks to the fact that Apple did a 1:1 mapping of methods from the old class to the new one, conceptually the change is significant: the graphical representation of the overlay is no longer provided by a UIView subclass, which is typically considered a heavy class, but by MKOverlayRenderer, which is much more lightweight and descends directly from NSObject. The mapping between the old and new classes is complete, so in the circle example MKCircleView is replaced by MKCircleRenderer.
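To make the new pattern concrete, here is a minimal sketch (our own, not taken from the demo project; iOS shown, coordinates and colors are arbitrary) that adds a built-in circle overlay and returns its renderer from the map view delegate:

// somewhere in the view controller: add a 5 km circle around Rome
MKCircle *circle = [MKCircle circleWithCenterCoordinate:CLLocationCoordinate2DMake(41.9, 12.5) radius:5000.0];
[self.mapView addOverlay:circle];

// MKMapViewDelegate: provide the renderer when the framework asks for it
- (MKOverlayRenderer *)mapView:(MKMapView *)mapView rendererForOverlay:(id<MKOverlay>)overlay {
	if([overlay isKindOfClass:[MKCircle class]]) {
		MKCircleRenderer *renderer = [[MKCircleRenderer alloc] initWithCircle:(MKCircle *)overlay];
		renderer.fillColor = [[UIColor redColor] colorWithAlphaComponent:0.3];
		return renderer;
	}
	return nil;
}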

Finally, overlays are stacked on the map, so they are given a Z-index that defines their position relative to each other and to the fixed parts of the map. Before iOS 7 you could stack these overlays and define their positions as in an array; with iOS 7 two stacks, called "levels", are defined instead: one is the "above the roads" level, the other is the "above the labels" level. This is an important and useful distinction because now we can change how the overlay rendering interacts with the map by specifying whether it lies above or below the labels.

Tile overlays

Whatever their complexity and size, up to now we have seen overlays as specific shapes. With the new MapKit provided with iOS 7 and OSX Mavericks, there is a new type of overlay called a tiled overlay. You may consider this type of overlay as a particular layer that covers the whole map: due to its large dimensions this overlay is tiled, that is it is partitioned into bitmap areas to reduce the memory required to show the data and make the overlay rendering efficient. The purpose of this concrete implementation of the MKOverlay protocol, called MKTileOverlay (together with its rendering counterpart, the MKTileOverlayRenderer class), is to efficiently represent the whole set of tiles across the map plane and for different zoom levels.

This last point is important: when you're displaying a map using bitmap drawing (as opposed to vector drawing), you get an efficient implementation only if the specific bitmap representing an area of the map has the right amount of detail for the current zoom level. This means that if we show the full map of Europe we don't need to draw roads, and cities should be represented as points, and only the major ones; as soon as we zoom in on a specific area we cannot continue to represent it by scaling the same tile, because it doesn't contain the required information and because we would see evident scaling artifacts. The solution is to divide the continuous zoom range into discrete levels and, for each level, provide the set of tiles that shows the detail appropriate for that level. It is evident that if we keep the single bitmap tile size constant (e.g. 256 x 256 pixels), then for each zoom level we must increase the number of tiles by a factor of 4: you can see this in the picture below, where the single European tile at zoom level 3, when zoomed to level 4, has been split, with further detail, into four new tiles all having the same size as the original tile.

 

(image: map tiles at increasing zoom levels)
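As a side note, the mapping between a geographic coordinate and a tile index at a given zoom level follows the standard OpenStreetMap "slippy map" convention; the short helper below (our own addition, not part of MapKit) shows the math and reproduces the Rome tile used in the next section:

#import <Foundation/Foundation.h>
#include <math.h>

// converts a coordinate to the (x, y) tile indexes at a given zoom level
void tileForCoordinate(double latitude, double longitude, NSInteger zoom, NSInteger *tileX, NSInteger *tileY) {
	double n = pow(2.0, zoom);               // number of tiles per side at this zoom level
	double latRad = latitude * M_PI / 180.0;
	*tileX = (NSInteger)floor((longitude + 180.0) / 360.0 * n);
	*tileY = (NSInteger)floor((1.0 - log(tan(latRad) + 1.0 / cos(latRad)) / M_PI) / 2.0 * n);
}

// e.g. tileForCoordinate(41.9, 12.5, 10, &x, &y) gives x=547, y=380: the Rome tile shown below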

URL templates

The tiled overlay class works efficiently because it does lazy loading of the tiles: a bitmap tile is looked up and loaded only when it needs to be displayed. In order to know the location of each tile, the developer must specify in the tile overlay definition the so-called URL template. This is a string representing a template for the final URL that will be used to retrieve the tile: the template contains some placeholders that will be replaced by effective values to get the final URL. Each tile is characterized by 4 parameters: x and y for the tile indexes in the map plane, z for the zoom level and finally scale for the bitmap image resolution (scale factor). The corresponding placeholders for these parameters are {x}, {y}, {z} and {scale}. So, as an example, the OpenStreetMap URL template is http://c.tile.openstreetmap.org/{z}/{x}/{y}.png and the tile with index x=547, y=380 and zoom level z=10, which fully encloses the city of Rome, will be represented by the URL http://c.tile.openstreetmap.org/10/547/380.png (see below the image taken from our OSX demo app).

(image: the Rome tile shown in the OSX demo app)

Note that a URL template can be an http:// template to retrieve tiles from the internet, but it can also be a file:// template if we want to retrieve files from disk: in this way we can ship our tiles in the application bundle, or download and install a full tile package for a certain city, and then display maps even if the device is not connected to the internet.

The mechanism used by the framework to translate a required tile coordinate (x, y, z, scale) into an effective bitmap is composed of several steps: this gives the developer the possibility to hook in his own code to customize the way the tiles are generated. This can be done by subclassing MKTileOverlay. Note that this is not required if setting the URL template is enough for you.

When the map framework needs a specific map tile, it calls loadTileAtPath:result: on the MKTileOverlay class (or subclass):

- (void)loadTileAtPath:(MKTileOverlayPath)path result:(void (^)(NSData *tileData, NSError *error))result;

The first method argument is called path and is an MKTileOverlayPath structure which contains the tile coordinates:

typedef struct {
	NSInteger x;
	NSInteger y;
	NSInteger z;
	CGFloat contentScaleFactor; // The screen scale that the tile will be shown on. Either 1.0 or 2.0.
} MKTileOverlayPath;

The second method argument is a completion block that must be called when the tile data has been retrieved: this completion block is called passing the data and an error object. The default MKTileOverlay implementation calls the -URLForTilePath: method to retrieve the URL and then uses NSURLConnection to load the tile data asynchronously.

If we want to customize the tile loading behaviour we can easily subclass MKTileOverlay and redefine loadTileAtPath:result: with our own implementation of the loading mechanism. E.g. we can implement our own tile caching mechanism (other than the one provided by the system via NSURLConnection) to return cached data before triggering the network call; or we could watermark the default tile if we are shipping a freemium version of our offline map.
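As an illustration of the first case, here is a rough sketch (class and property names are ours, not from the demo project) of a subclass that keeps tiles in an NSCache and falls back to the default loading on a cache miss:

// CachedTileOverlay.h (a hypothetical subclass)
@interface CachedTileOverlay : MKTileOverlay
@property (nonatomic, strong) NSCache *tileCache; // assumed to be created in the initializer
@end

// CachedTileOverlay.m
@implementation CachedTileOverlay

- (void)loadTileAtPath:(MKTileOverlayPath)path result:(void (^)(NSData *tileData, NSError *error))result {
	NSString *key = [NSString stringWithFormat:@"%ld/%ld/%ld", (long)path.z, (long)path.x, (long)path.y];
	NSData *cachedData = [self.tileCache objectForKey:key];
	if(cachedData) {
		// cache hit: return the tile immediately, no network access
		result(cachedData, nil);
		return;
	}
	// cache miss: let the default implementation resolve the URL template and load the tile,
	// then keep a copy for the next request
	[super loadTileAtPath:path result:^(NSData *tileData, NSError *error) {
		if(tileData) {
			[self.tileCache setObject:tileData forKey:key];
		}
		result(tileData, error);
	}];
}

@end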

A lighter way to hook into the tile loading mechanism is to redefine the -URLForTilePath: method in our subclass:

- (NSURL *)URLForTilePath:(MKTileOverlayPath)path;

The purpose of this method is to return the URL for a given tile path. The default implementation simply fills out the URL template, as specified above. You need to redefine this method if the URL template mechanism is not sufficient for your needs. A typical case is when you want to pass in the URL a sort of "device identifier" to validate the eligibility of that specific app to access the URL (e.g. if you limit the quantity of data that can be accessed by a user in a given time, or if you want to charge for this data); another case is if you have multiple tile servers and you want to do a sort of in-app load balancing or region-based API access (e.g. you have servers in multiple locations and, based on the device location, you want to access the closest server).
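For instance, the load-balancing case could look roughly like this (the server names are purely hypothetical):

// in an MKTileOverlay subclass: pick one of three hypothetical mirrors based on the tile coordinates
- (NSURL *)URLForTilePath:(MKTileOverlayPath)path {
	NSArray *servers = @[@"a", @"b", @"c"];
	NSString *server = servers[(path.x + path.y) % servers.count];
	NSString *urlString = [NSString stringWithFormat:@"http://%@.tile.example.com/%ld/%ld/%ld.png",
	                       server, (long)path.z, (long)path.x, (long)path.y];
	return [NSURL URLWithString:urlString];
}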

The tile renderer

As all overlays are associated with a renderer, the tile overlay also has its concrete renderer class: MKTileOverlayRenderer. Normally you don't need to subclass this renderer, so your map delegate's -mapView:rendererForOverlay: method can simply instantiate the default tile overlay renderer initialized with your default or subclassed tile overlay instance. Possible applications of a custom overlay renderer are when you need to further manipulate the bitmap image, e.g. adding a watermark or applying a filter, and this manipulation is independent from the tile source. In the demo code I defined a custom renderer, used specifically for the Google map, whose effect is to add a sort of colored translucent mosaic on top of the map tiles.

The demo code

You can get the demo code from GitHub. This code works on both iOS 7 and OSX 10.9 and its purpose is to present a map and give the user the possibility to switch between different tile sets: Apple (system), Google, OpenStreetMap and offline, from a subset of OpenStreetMap tiles bundled within the app. In all cases I applied an extra overlay layer to show the tile grid with the x,y,z path associated with each tile. (Note: on OSX, if you don't code sign the app using your OSX Developer Program certificate you will not be able to see the Apple tiles; the other three tile sets will still be visible.) You will see how you can fully take advantage of all the features common to MapKit (zoom, rotation, pan, custom overlays and also annotations, which I didn't include in the demo): the only difference is in the tile source and how the tiles are rendered.

 

(image: screenshot of the demo app)

As you can see in the demo apps, there is a main view controller (iOS) and window controller (OSX). In both cases the main view contains an instance of MKMapView and a segmented control to switch between different visualizations. On the map I have instantiated two overlays. The first one is the grid overlay:

 // load grid tile overlay
 self.gridOverlay = [[GridTileOverlay alloc] init];
 self.gridOverlay.canReplaceMapContent=NO;
 [self.mapView addOverlay:self.gridOverlay level:MKOverlayLevelAboveLabels];

This is a tile overlay of the GridTileOverlay subclass. It will not replace the map content (that is, it is effectively overlaid on the map content) and its purpose is to draw the tile grid just above the labels.

The reloadOverlay method is called each time the overlay type selector is changed or when the view is loaded. It removes any existing tileOverlay and replaces it with a new one:

-(void)reloadTileOverlay {

	// remove existing map tile overlay
	if(self.tileOverlay) {
		[self.mapView removeOverlay:self.tileOverlay];
	}

	// define overlay
	if(self.overlayType==CustomMapTileOverlayTypeApple) {
		// do nothing
		self.tileOverlay = nil;
	} else if(self.overlayType==CustomMapTileOverlayTypeOpenStreet || self.overlayType==CustomMapTileOverlayTypeGoogle) {
		// use online overlay
		NSString *urlTemplate = nil;
		if(self.overlayType==CustomMapTileOverlayTypeOpenStreet) {
			urlTemplate = @"http://c.tile.openstreetmap.org/{z}/{x}/{y}.png";
		} else {
			urlTemplate = @"http://mt0.google.com/vt/x={x}&y={y}&z={z}";
		}
		self.tileOverlay = [[MKTileOverlay alloc] initWithURLTemplate:urlTemplate];
		self.tileOverlay.canReplaceMapContent=YES;
		[self.mapView insertOverlay:self.tileOverlay belowOverlay:self.gridOverlay];
	}
	else if(self.overlayType==CustomMapTileOverlayTypeOffline) {
		NSString *baseURL = [[[NSBundle mainBundle] bundleURL] absoluteString];
		NSString *urlTemplate = [baseURL stringByAppendingString:@"/tiles/{z}/{x}/{y}.png"];
		self.tileOverlay = [[MKTileOverlay alloc] initWithURLTemplate:urlTemplate];
		self.tileOverlay.canReplaceMapContent=YES;
		[self.mapView insertOverlay:self.tileOverlay belowOverlay:self.gridOverlay];
	}
}

In the Apple maps case no extra overlay is added, of course: we just use the base map. When we select the Google or OpenStreetMap view we use a standard MKTileOverlay class with the appropriate URL template. In both cases the overlay is added with the canReplaceMapContent property set to YES: this replaces the Apple base maps completely and prevents that data from being loaded. Note that we add the tileOverlay just below the gridOverlay. Finally, the offline case still uses the base overlay class but with a file URL template: note that we create the path from a hierarchical directory structure built inside the bundle. In this case too the new tiles replace the base ones and are inserted below the grid.

Our controller, being a delegate of MKMapView, responds to -mapView:rendererForOverlay:. This is required by every application that uses overlays, as this is the point where the app effectively tells the system how to draw an overlay that is currently visible in the map. In our case we just check that the overlay is a tile overlay (a general check to account for the fact that we might have other types of overlays) and, based on the selection, we use the standard MKTileOverlayRenderer or a custom WatermarkTileOverlayRenderer. The latter is used to apply a randomly colored semi-transparent effect on top of the tiles, resulting in a vitreous mosaic effect.
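In code, the delegate method described above looks more or less like this (a sketch based on the description; the actual implementation in the repository may differ in its details):

- (MKOverlayRenderer *)mapView:(MKMapView *)mapView rendererForOverlay:(id<MKOverlay>)overlay {
	if([overlay isKindOfClass:[MKTileOverlay class]]) {
		if(self.overlayType==CustomMapTileOverlayTypeGoogle && overlay==self.tileOverlay) {
			// custom renderer: adds the colored translucent mosaic on top of the Google tiles
			return [[WatermarkTileOverlayRenderer alloc] initWithTileOverlay:(MKTileOverlay *)overlay];
		}
		// the grid overlay and the other tile overlays use the standard renderer
		return [[MKTileOverlayRenderer alloc] initWithTileOverlay:(MKTileOverlay *)overlay];
	}
	return nil;
}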

Conclusions

The possibility to easily switch between different map types while keeping the same "map navigation experience" is one of the most revolutionary features introduced with iOS 7, other than the long-awaited introduction of native maps in OSX. It provides the same map infrastructure whatever the content. Obviously the generation of custom map content is another huge and highly specialized task that we cannot cover here, but for developers this is a great step forward.

References

  • Location and Maps Programming Guide from Apple Developer Library
  • WWDC 2013 session 304 video: What’s new in Map Kit from Apple WWDC 2013 videos
  • MBXMapKit GitHub project by Mapbox – a simple library to integrate Mapbox maps on top of MapKit, one of the first applications of tiled overlays


  • The GDAL project, one of the main references for custom map creation. Here is a link to a compiled version of the GDAL OSX Framework
  • Maperitive, another great tool (Windows only) to create custom maps and prepare them for offline usage
Posted by Carlo Vigiani

Why video composition

You may think that video composition should be limited to applications like iMovie or Vimeo, so that this subject, at least from the point of view of the developer, is confined to a niche of video experts. Instead it can be extended to a broader range of applications, not necessarily limited to practical video editing. In this post I will provide an overview of the AV Foundation framework applied to a practical example.

In my particular case the challenge was to build an application that, starting from a set of existing video clips, was able to build a story by attaching a subset of these clips based on decisions taken by the user while interacting with the app. The final play is a set of scenes, shot in different locations, that compose a story. Each scene consists of a prologue, a conclusion (epilogue) and a set of smaller clips that will be played by the app based on some user choices. If the choices are correct, the user will be able to play the whole scene up to its happy ending, but in case of mistakes the user will return to the initial prologue scene or to some intermediate scene. The diagram below shows a possible scheme of a typical scene: a prologue, a winning stream (green), a few branches (yellow are intermediate, red are losing branches) and a happy ending. So somewhere in TRACK1 the user will be challenged to take a decision; if he/she is right the game will continue with TRACK2, if not it will enter the yellow TRACK4, and so on.

iPhone & iPad: Movie Game Storyboard
What I have in my hands is the full set of tracks, each track representing a specific subsection of a scene, and a storyboard which gives me the rules to be followed in order to build the final story. So the storyboard is made of the scenes, of the tracks that compose each scene and of the rules that establish the flow through these tracks. The main challenge for the developer is to put together these clips and play a specific video based on the current state of the storyboard, then advance to the next, select a new clip again and so on: everything should be smooth and interruptions limited. Besides, the user needs to take his decisions by interacting with the app, and this can be done by overlaying some custom controls on the movie.

The AV Foundation Framework

Trying to reach the objectives explained in the previous paragraph using the standard Media Player view controllers, MPMoviePlayerController and MPMoviePlayerViewController, would be impossible. These controllers are good for playing a movie with the system controls, full-screen support and device rotation, but absolutely not for advanced control. Since the release of the iPhone 3GS the camera utility has had some trimming and export capabilities, but these capabilities were not exposed to developers through public functions of the SDK. With iOS 4, the work done by Apple on the iMovie app gave developers a rich set of classes that allow full video manipulation. All these classes have been collected and exported in a single public framework called AV Foundation. This framework has existed since iOS 2.2, at that time dedicated to audio management with the well known AVAudioPlayer class; it was then extended in iOS 3 with the AVAudioRecorder and AVAudioSession classes, but the full set of features that allow advanced video capabilities arrived only with iOS 4 and was fully presented at WWDC 2010.

The position of AV Foundation in the iOS framework stack is just below UIKit, behind the application layer, and immediately above the basic Core Services frameworks, in particular Core Media, which is used by AV Foundation to import the basic timing structures and functions needed for media management. In any case, note the different position in the stack compared with the very high-level Media Player framework. This means that this framework cannot offer a plug-and-play class for simple video playing, but you will appreciate the high-level and modern concepts behind it; for sure we are not at the same level as older frameworks such as Core Audio.

(image source: Apple iOS Developer Library)

Building blocks

The class organization of AV Foundation is quite intuitive. The starting point and main building block is AVAsset. AVAsset represents a static media object and is essentially an aggregate of tracks, which are timed representations of a part of the media. All tracks are of uniform type, so we can have audio tracks, video tracks, subtitle tracks, and a complex asset can be made of multiple tracks of the same type, e.g. multiple audio tracks. In most cases an asset is made of one audio and one video track. Note that AVAsset is an abstract class, so it is unrelated to the physical representation of the media it represents; besides, creating an AVAsset instance doesn't mean that we have the whole media ready to be played, it is a purely abstract object.


There are two concrete asset classes available: AVURLAsset, to represent a media in a local file or in the network, and AVComposition (together with its mutable variant AVMutableComposition) for an asset composed by multiple media. To create an asset from a file we need to provide its file URL:

NSDictionary *optionsDictionary = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
AVURLAsset *myAsset = [AVURLAsset URLAssetWithURL:assetURL options:optionsDictionary];

The options dictionary can be nil, but for our purposes – that is, making a movie composition – we need to calculate the duration exactly and provide random access to the media. This extra option, setting the AVURLAssetPreferPreciseDurationAndTimingKey key to YES, may require extra time during asset initialization, depending on the movie format. If the movie is in QuickTime or MPEG-4 format then the file contains additional summary information that avoids this extra parsing time; but there are other formats, like MP3, where this information can be extracted only after decoding the media file, in which case the initialization time is not negligible. This is a first recommendation we give to developers: please use the right file format for your application.
In our application we already know the characteristics of the movies we are using, but in a different kind of application, where you must do some editing on user-imported movies, you may be interested in inspecting the asset properties. In that case we must remember the basic rule that initializing an asset doesn't mean the whole asset has been loaded and decoded in memory: every property of the media file can be inspected, but this may require some extra time. For completeness we simply introduce the way asset inspection can be done, leaving the interested reader to the reference documentation (see the suggested readings list at the end of this post). Basically each asset property can be inspected using an asynchronous protocol called AVAsynchronousKeyValueLoading, which defines two methods:

- (AVKeyValueStatus)statusOfValueForKey:(NSString *)key error:(NSError **)outError
- (void)loadValuesAsynchronouslyForKeys:(NSArray *)keys completionHandler:(void (^)(void))handler

The first method is synchronous and immediately returns the knowledge status of the specified value. E.g. you can ask for the status of "duration" and the method will return one of these possible statuses: loaded, loading, failed, unknown, cancelled. In the first case the key value is known and can be retrieved immediately. If the value is unknown it is appropriate to call the loadValuesAsynchronouslyForKeys:completionHandler: method which, at the end of the operation, will call the callback given in the completionHandler block, which in turn will query the status again and take the appropriate action.
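As a minimal sketch of this pattern (our own, following the protocol described above), loading the duration of the asset created earlier could look like this:

NSArray *keys = [NSArray arrayWithObject:@"duration"];
[myAsset loadValuesAsynchronouslyForKeys:keys completionHandler:^{
	NSError *error = nil;
	AVKeyValueStatus status = [myAsset statusOfValueForKey:@"duration" error:&error];
	if(status == AVKeyValueStatusLoaded) {
		// the value is now known and can be read without blocking
		NSLog(@"duration: %.2f s", CMTimeGetSeconds([myAsset duration]));
	} else {
		// failed, cancelled or still unknown: handle the error case
	}
}];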

Video composition

As I said at the beginning, my storyboard is made of a set of scenes and each scene is composed of several clips whose playing order is not known a priori. Each scene behaves separately from the others, so we'll create a composition for each scene. When we take a set of assets, or tracks, and build a composition from them, all in all we are creating another asset. This is the reason why the AVComposition and AVMutableComposition classes are in fact subclasses of the base AVAsset class.
You can add media content inside a mutable composition by simply selecting a segment of an asset, and adding it to a specific range of the new composition:

- (BOOL)insertTimeRange:(CMTimeRange)timeRange ofAsset:(AVAsset *)asset atTime:(CMTime)startTime error:(NSError **)outError

In our example we have a set of tracks and we want to add them one after the other in order to generate a continuous sequence of clips. So the code can simply be written in this way:

 

AVMutableComposition *composition = [AVMutableComposition composition];
CMTime current = kCMTimeZero;
NSError *compositionError = nil;
for(AVAsset *asset in listOfMovies) {
	BOOL result = [composition insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration])
	                                   ofAsset:asset
	                                    atTime:current
	                                     error:&compositionError];
	if(!result) {
		if(compositionError) {
			// manage the composition error case
		}
	} else {
		current = CMTimeAdd(current, [asset duration]);
	}
}

First of all, we introduced the concept of time. Note that all media have a concept of time different from the usual one. Time can move back and forth, and the time rate can be higher or lower than 1x if you are playing the movie in slow motion or fast forward. Besides, it is considered more convenient to represent time not as a floating point or integer number but as a rational number. For this reason the Core Media framework provides the CMTime structure and a set of functions and macros that simplify the manipulation of these structures. So in order to build a specific time instance we do:

CMTime myTime = CMTimeMake(value,timescale);

which in fact specifies a number of seconds given by value/timescale. The main reason for this choice is that movies are made of frames and frames are paced at a fixed rate per second. So, for example, if we have a clip which has been shot at 25 fps, it is convenient to represent the single frame interval as a CMTime with value=1 and timescale=25, corresponding to 1/25th of a second; 1 second will be given by a CMTime with value=25 and timescale=25, and so on (of course you can still work with plain seconds if you like, simply use the CMTimeMakeWithSeconds(seconds, timescale) function). So in the code above we initially set the current time to 0 seconds (kCMTimeZero), then start iterating on all of our movies, which are the assets in listOfMovies. We add each of these assets at the current position of our composition using their full range ([asset duration]). For every asset we move our composition head (current) forward by the length (in CMTime) of the asset. At this point our composition is made of the full set of tracks added in sequence. We can now play them.
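A few lines make the rational representation concrete (our own example, using the 25 fps case above):

CMTime oneFrame  = CMTimeMake(1, 25);                 // one frame of a 25 fps clip, i.e. 1/25 s
CMTime oneSecond = CMTimeMake(25, 25);                // 25 frames = 1 s
CMTime clipStart = CMTimeMakeWithSeconds(105.0, 25);  // 1:45, expressed with the same timescale
CMTime total     = CMTimeAdd(clipStart, oneSecond);   // rational arithmetic, no rounding errors
NSLog(@"%.2f s", CMTimeGetSeconds(total));            // prints 106.00 s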

Playing an asset

The AV Foundation framework doesn't offer any built-in full player as we are used to seeing with MPMoviePlayerViewController. The engine that manages the playing state of an asset is provided by the AVPlayer class. This class takes care of all aspects related to playing an asset and is essentially the only class in AV Foundation that interacts with the application view controllers to keep the application logic in sync with the playing status: this is relevant for the kind of application we are considering in this example, as the playback state may change during the movie execution based on specific user interactions at specific moments in the movie. However, we don't have a direct relation between AVAsset and AVPlayer: their connection is mediated by another class called AVPlayerItem. The purpose of this class organization is to separate the asset, considered as a static entity, from the player, purely dynamic, by providing an intermediate object, the player item, that represents a specific presentation state of an asset. This means that with a given and unique asset we can associate multiple player items, all representing different states of the same asset and played by different players. So the flow in this case is: from a given asset, create a player item and then assign it to the final player.

AVPlayerItem *compositionPlayerItem = [AVPlayerItem playerItemWithAsset:composition];
AVPlayer *compositionPlayer = [AVPlayer playerWithPlayerItem:compositionPlayerItem];

 

In order to render it on screen, we have to provide a view capable of displaying the current playing status. We already said that iOS doesn't offer an off-the-shelf view for this purpose, but what it offers is a special Core Animation layer called AVPlayerLayer. You can insert this layer in your player view's layer hierarchy or, as in the example below, use this layer as the base layer of the view. So the suggested approach is to create a custom MovieViewer and set AVPlayerLayer as its base layer class:

// MovieViewer.h

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
@interface MovieViewer : UIView {
}
@property (nonatomic, retain) AVPlayer *player;
@end

// MovieViewer.m

@implementation MovieViewer
+ (Class)layerClass {
return [AVPlayerLayer class];
}
- (AVPlayer*)player {
	return [(AVPlayerLayer *)[self layer] player];
}
- (void)setPlayer:(AVPlayer *)player {
	[(AVPlayerLayer *)[self layer] setPlayer:player];
}
@end

// Instantiating MovieViewer in the scene view controller
// We suppose "viewer" has been loaded from a nib file
// MovieViewer *viewer
[viewer setPlayer:compositionPlayer];

At this point we can play the movie, which is quite simple:

[[viewer player] play];
Observing playback status

It is relevant for our application to monitor the status of the playback and to observe some particular timed events occurring during the playback.
As far as status monitoring is concerned, you follow the standard KVO-based approach by observing changes in the status property of the player:

// inside the SceneViewController.m class we'll register to player status changes
[viewer.player addObserver:self forKeyPath:@"status" options:NSKeyValueObservingOptionNew context:NULL];

// and then we implement the observation callback
-(void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
    if(object==viewer.player) {
        AVPlayer *player = (AVPlayer *)object;
        if(player.status==AVPlayerStatusFailed) {
            // manage failure
        } else if(player.status==AVPlayerStatusReadyToPlay) {
            // player ready: manage success state (e.g. by playing the movie)
        } else if(player.status==AVPlayerStatusUnknown) {
            // the player is still not ready: manage this waiting status
        }
    }
}

Differently from the KVO-observable properties, timed-event observation is not based on KVO: the reason is that the playhead moves continuously and playing is usually done on a dedicated thread. So the system prefers to send its notifications through a dedicated channel, which in this case is a block-based callback that we can register to track such events. We have two ways to observe timed events:

  • registering for periodic intervals notifications
  • registering when particular times are traversed

In both methods the caller can specify a serial queue where the callbacks will be dispatched (it defaults to the main queue) and of course the callback block. It is relevant to note the serial behaviour of the queue: all events will be queued and executed one by one; for frequent events you must ensure that these blocks execute fast enough to allow the queue to process the next blocks, and this is especially true if you're executing the block on the main thread, to avoid the application becoming unresponsive. Don't forget to schedule the block to run on the main thread if it updates the UI.
Registration for periodic intervals is done in this way, where we ask for a 1 second callback whose main purpose is to refresh the UI (typically updating a progress bar and the current playback time):

// somewhere inside SceneController.m
id periodicObserver = [viewer.player addPeriodicTimeObserverForInterval:CMTimeMakeWithSeconds(1.0, 1) queue:NULL usingBlock:^(CMTime time){
[viewer updateUI];
}];
[periodicObserver retain];

// and in the clean up method
-(void)cleanUp {
[viewer.player removeTimeObserver:periodicObserver];
[periodicObserver release];
}

// inside MovieViewer.m
-(void)updateUI {
// do other stuff here
// …
// we calculate the playback progress ratio by dividing the current playhead position by the total movie duration
float progress = CMTimeGetSeconds(player.currentTime)/CMTimeGetSeconds(player.currentItem.duration);
// then we update the movie viewer progress bar
[progressBar setProgress:progress];
}

 

Registration to timed events is done using a similar method which takes as argument a list of NSValue representations of CMTime (AVFoundation provides a NSValue category that adds CMTime support to NSValue):

// somewhere inside SceneController.m
id boundaryObserver = [viewer.player addBoundaryTimeObserverForTimes:timedEvents queue:NULL usingBlock:^{
[viewer processTimedEvent];
}];
[boundaryObserver retain];
// inside MovieViewer.m
-(void)processTimedEvent {
// do something in the UI
}
In both cases we need to unregister and deallocate somewhere in our scene controller the two observer opaque objects; we may suppose the existence of a cleanup method that will be assigned this task:
-(void)cleanUp {
[viewer.player removeTimeObserver:periodicObserver];
[periodicObserver release];
[viewer.player removeTimeObserver:boundaryObserver];
[boundaryObserver release];
}

While this code shows the general way to register events, in our application it is more appropriate to assign a specific action to each event, that is, we need to customize each handling block. Looking at the picture below, you can see that we assigned specific events at specific timed positions inside each of our clips.


The figure is quite complex and not all relationships have been highlighted. Essentially what you can see is the "winning" sequence made of all the green blocks: they have been placed consecutively in order to avoid the playhead jumping to different segments when the player takes the right decisions, so playback will continue smoothly without interruption. With the exception of the prologue track, which is just an introduction to the story and requires no user interaction, and its corresponding conclusion, simply an epilogue where the user is invited to go to the next scene, all other tracks are marked by a few timed events, identified by the dashed red vertical lines. Essentially we have identified 4 kinds of events:

  • segment (clip) starting point: this will be used as a destination point for the playhead in case of jump;
  • show controls: all user controls will be displayed on screen, user interaction is expected;
  • hide controls: all user controls are hidden, and no more user interaction is allowed;
  • decision point, usually coincident with the hide controls event: the controller must decide which movie segment must be played based on the user decision.

Note that this approach is quite flexible and in theory you can define any kind of event; this depends on the imagination of the game designers. From the point of view of the code, we in fact subclassed AVURLAsset by adding an array of timed event definitions. At composition creation time these events are re-timed according to the new time base (e.g. if an event fires at 0:35 of a clip, but the starting point of the clip is at 1:45 of the entire sequence, then the event must be re-timed to 1:45 + 0:35 = 2:20). At this point, with the full list of events, we can rewrite our boundary registration:

// events is the array of all re-timed events in the complete composition
__block __typeof__(self) _self = self; // avoids retain cycle on self when used inside the block
[events enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
TimedEvent *ev = (TimedEvent *)obj;
[viewer.player addBoundaryTimeObserverForTimes:[NSArray arrayWithObject:[NSValue valueWithCMTime:ev.time]]
queue:dispatch_get_main_queue()
usingBlock:^{
// send event to interactiveView
[viewer performTimedEvent:ev];
[_self performTimedEvent:ev];
}];
}];

 

 

As you can see the code is quite simple: for each timed event we register a single boundary which simply calls two methods, one on the movie viewer and one on the scene controller; in both cases we pass the specific event so the receiver knows exactly what to do. The viewer normally takes care of UI interaction (it overlays a few controls on top of the player layer, so according to the events these controls will be shown or hidden; besides, the viewer knows which control has been selected by the user) while the scene controller manages the game logic, especially in the case of decision events. When the controller finds a decision event, it must move the playhead to the right position in the composition:

 

CMTime goToTime = # determines the starting time of the next segment #
[viewer hide];
[viewer.player seekToTime:goToTime toleranceBefore:kCMTimeZero toleranceAfter:kCMTimePositiveInfinity completionHandler:^(BOOL finished) {
	if(finished) {
		dispatch_async(dispatch_get_main_queue(), ^{
			[viewer show];
		});
	}
}];

 

What happens in the code above is that, when we need to move the playhead to a specific time, we first determine this time, then we ask the AVPlayer instance to seek to it, allowing the head to land at that position or after it with some tolerance (kCMTimePositiveInfinity) but not before it (kCMTimeZero in the toleranceBefore: parameter; we need this because the composition is made of consecutive clips, and moving the playhead before the starting time of our clip could show a small portion of the previous clip). Note that this operation is not immediate and, even if quite fast, it can take about one second. What happens during this transition is that the player layer shows a still frame somewhere in the destination time region, then starts decoding the full clip and resumes playback from another frame, usually different from the still one. The final effect is not really good, and after a little experimentation I decided to hide the player layer immediately before starting the seek and show it again as soon as the player informs me (through the completionHandler callback block) that the movie is ready to be played again.

Conclusions and references

I hope this long post will push other developers to start working on interactive movie apps that leverage the advanced video editing capabilities of iOS for purposes beyond plain video editing. The AV Foundation framework offers us very powerful tools which are not difficult to use. In this post I didn't explore some of the more advanced classes, such as AVVideoComposition and AVSynchronizedLayer. The former is used to create transitions, the latter is used to synchronize Core Animation effects with the internal media timing.

Great references on the subject can be found in the iOS Developer Library or WWDC videos and sample code:

  • For a general overview: AVFoundation Programming Guide in the iOS Developer Library
  • For the framework classes documentation: AVFoundation Framework Reference in the iOS Developer Library
  • Video: Session 405 – Discovering AV Foundation from WWDC 2010, available in iTunesU to registered developers
  • Video: Session 407 – Editing Media with AV Foundation from WWDC 2010, available in iTunesU to registered developers
  • Video: Session 405 – Exploring AV Foundation from WWDC 2010, available in iTunesU to registered developers
  • Video: Session 415 – Working with Media in AV Foundation from WWDC 2011, available in iTunesU to registered developers
  • Sample code: AVPlayDemo from WWDC 2010 sample code repository
  • Sample code: AVEditDemo from WWDC 2010 sample code repository

 

Written by Carlo Vigiani

Increasingly, our customers and readers submit proposals for developing an app without using standard documents and without the terminology that would help us understand properly how many and which features the app should have.

If you want to convey an idea for an iPhone, iPad or Android app, it is necessary to prepare a clear mockup and/or functional document containing all the detailed features of the app, together with a layout that conveys the user interface and user experience.


App Cooker – design, mockup and prototype app interfaces in iOS

If you want to use your iPad as a tool to prepare the mockup prototype, we recommend App Cooker (website: http://www.appcooker.com/).

In the next article we will instead report on the tools you can use with your Mac or PC.

Many people have almost entirely abandoned the use of computers and rely on the iPad to send emails, browse the internet and use their tools.

App Cooker is an iPad app and we consider it a great tool if you want to bring your ideas to a stage of possible realization.

App Cooker is available in the App Store at a price of €15.99 which, according to some rumors, will increase to $24 with the next version. The application (iPad only) is developed by HotAppsFactory and, as we have said, is used to design iPhone and iPad applications.

The App Board will collect your conceptual plans, mock-ups, icons, App Store information and pricing strategy.
It will be the backbone of your project and lets you work in an organized and clear way.

Below is a video in English directly from the official site:

Define the ideas

We start with an idea, a sketch, and use this app to organize the ideas and get inspired. The idea is the essence of any application and it requires time and careful consideration. App Cooker provides a dedicated tool for this, offering valuable advice drawn from Apple and other industry professionals.

 

iOS Mockups

The mockup engine supports orientation and simple links, and combines Apple's UI design elements with bitmaps, vector shapes, text and images.
Prototypes can come to life without a single line of code.

The app icon

The icon is the face of your application. Creating great icons requires experimentation and several attempts until the right solution is found. Using the freehand tool, images or vector shapes you can define the look of the icons for your ideas and see the results in various sizes in no time.

Pricing tool
App Cooker allows you to compare a large number of pricing scenarios to find the right model for your application. It supports both in-app purchases and advertising, which makes it easy to predict revenues, costs and profits.

 

App Store descriptions

The description on the iTunes product page is a deciding factor for potential buyers. App Cooker makes writing this information a simple task and provides a place to localize it for any App Store, in 18 languages.

Mockups before code development

Designing a good application is difficult. You must have creativity, talent, resources, knowledge, time and a strong sense of self-criticism; successful apps are the result of a long refinement process.

We have spent a long time, with the many different clients and companies who contact us, trying to explain the best way to design applications for iPhone, iPad and Android.

Over time you discover, sooner or later, that the design of an application is much more than just graphic design.

The App Cooker site also collects ten design principles that are worth reading.

 

In conclusion,

App Cooker is a professional application, since it allows you to design all the elements of an application compatible with all Apple iOS devices.

For all those who intend to develop their own application: you must first make the initial idea clear, together with compatibility with the various devices and the various functions; then you must create the various graphic elements of the mockup, the icon, the App Store listing and the deployment and earnings prospects.


HTML(5) Approach
The final technique is something that is emerging now, especially thanks to the great improvements in terms of stability and speed introduced by the latest version of iOS for in-app web views. A couple of good examples of this approach are the Ars Technica app (link) and the Bloomberg Businessweek+ magazine (link).

The concept is quite simple: HTML and CSS are common and powerful techniques to lay out a page on screen: why not leverage the skills developed by many web designers to make a magazine that perfectly fits the iPad?
The core block at the base of this approach is the UIWebView Cocoa Touch object: with this view we can load any kind of HTML document, local or remote, and lay it out in the page at an adequate speed (though not the fastest) and without surprises. Besides, we can get rid of the overlay technique, as the web view is capable of displaying images, playing movies and of course executing JavaScript based widgets. This component also provides two-way interaction between the JavaScript world and the Objective-C runtime (and in fact this justifies the existence of extension languages such as Objective-J, provided with the Cappuccino framework: http://cappuccino.org/). Finally, the web view is highly responsive to user interactions, and some features like text selection and dictionary lookup come for free.
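As a rough sketch of this approach (ARC assumed; the file name, the JavaScript function and the custom URL scheme are hypothetical), a page bundled with the app can be loaded into a UIWebView and the two worlds can talk to each other like this:

// somewhere in the page view controller: load a bundled HTML page
NSURL *pageURL = [[NSBundle mainBundle] URLForResource:@"page12" withExtension:@"html"];
UIWebView *webView = [[UIWebView alloc] initWithFrame:self.view.bounds];
webView.delegate = self;
webView.scalesPageToFit = YES;
[webView loadRequest:[NSURLRequest requestWithURL:pageURL]];
[self.view addSubview:webView];

// Objective-C -> JavaScript: once the page is loaded, call a (hypothetical) function defined in it
- (void)webViewDidFinishLoad:(UIWebView *)webView {
	[webView stringByEvaluatingJavaScriptFromString:@"setLayout('two-columns');"];
}

// JavaScript -> Objective-C: the page triggers a custom URL scheme, the app intercepts it
- (BOOL)webView:(UIWebView *)webView shouldStartLoadWithRequest:(NSURLRequest *)request navigationType:(UIWebViewNavigationType)navigationType {
	if([[[request URL] scheme] isEqualToString:@"mymagazine"]) {
		// handle the in-page event (share, go to table of contents, ...)
		return NO;
	}
	return YES;
}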
The open-source world is highly active in this area: projects like Baker (www.bakerframework.com), Siteless (www.siteless.org), Laker (www.lakercompendium.com) and pugpig (pugpig.com) make this kind of solution publicly available.

Sincerely, we don't know if this will be the final solution for everybody. Of course a publisher that has already invested in setting up a web site (but not in Flash!), and this is quite common among newspapers, will be able to port most of the layout and contents to the iPad, and sometimes this can be achieved with an adaptation of the CMS output views to provide files that can easily be fed to the app.

Care must be taken not to push this approach to its extremes: don't forget that web page rendering requires an inner engine and, in the end, any intermediate layer requires resources and extra time. Sometimes, and this is particularly evident on the first generation iPad, content updates following user interaction are not very responsive. So it is not recommended to turn every single aspect of the magazine app into web based content: clearly in this way you're helping all the JavaScript developers not skilled in Objective-C, but a performance penalty will be visible.

As an example, the toolbar typical of all magazine apps, used to access extra features (sharing, table of contents, home page, etc.), should always be built using native Cocoa Touch components and not an HTML+CSS solution.

However, if the publisher accepts converting his design flow to a web based one and you, as a developer, prefer to base your work on consolidated and easy to manipulate methodologies, this should be the first choice to take into consideration.

Conclusions
We hope this article gives a good overview of the major techniques used to render pages in a magazine, newspaper or e-book. We may have missed some technique we're not aware of; in that case, dear reader, any feedback from you is welcome!

About the author: Carlo Vigiani
He is an electronics engineer and software developer located in Italy. He is CTO and co-founder of the new startup i3Factory.com, active in the development of iOS, Android and Windows Mobile apps, with a special focus on publishing, tourism and music apps.

Source: www.icodermag.com

01/2012

Pages Pre-Rendered as Images
This technique is heavily used in the highly interactive magazines published using the Adobe digital publishing environment: well known examples are the Condé Nast magazines (Wired is one of the most famous).
The way these magazines are implemented starts with the well known suite of Adobe Digital Publishing tools, InDesign first of all. These tools are used by many publishers around the world and the latest versions offer the possibility to export the project, other than in the ubiquitous PDF format, in a package suited for distribution on the iPad. The output can be tested using the free Adobe Content Viewer app downloadable from the App Store, but of course the final branded app, together with the server infrastructure required to serve the contents, requires a higher tier license.

What characterizes this kind of magazine is that at project creation time all pages are pre-rendered as JPEG or PNG images and then special effects are overlaid.
This means that the core section of the magazine reader is essentially an image viewer. Sure, these images will span an area slightly larger than the iPad screen, so they will be embedded inside a scroll view, but they are still images. All in all, technically the choice is not bad: the iPad is quite a bit better at rendering images than PDF files, as the calculations needed to transform the PDF data into bitmaps are completely skipped here, while the CPU just needs to decompress the image and send it to the graphics hardware. Exactly as we did in the PDF case, we can apply the overlay technique to superimpose content that requires user interaction on top of the bottom rendering layer.

While this technique is highly efficient from the point of view of rendering time, and is simple to implement as all the page layout complexities have been taken into account and solved by the desktop publishing tools, it has a few limitations that need to be considered:

•     every single page takes quite a lot more space on disk and the download time of this kind of magazine increases correspondingly; in comparison with a PDF page, the space taken is much greater as every pixel of text must be stored in the file, and we cannot force high compression ratios if we don't want to introduce blurring in the text. The PDF page, especially for pages made of text only, is much lighter as the text is not pre-rendered.

•     zooming or font resizing is not feasible: both PDF and Core Text redraw the text using vector algorithms or per-size font representations, and this is not possible with a static image. This means that the magazine needs to be drawn with specific font types and sizes, fonts which are well suited for JPEG compression (no blur) and the screen resolution (132 dpi, not so high; things will be better with the next retina display iPad!)

•     text search, highlighting and selection are impossible, unless the digital publishing tool exports, together with the pre-rendered pages, a full map of text coordinates, something I haven't seen yet!

Adobe is not alone in publishing this kind of magazine: there are several custom apps in the market that follow exactly the same approach. It's not bad, but it doesn't leverage the great publishing frameworks that Apple is offering to its developers, and it has too many limitations compared with other techniques. For sure a publisher that masters the digital publishing tools I mentioned before can take advantage of this approach, as the final quality is undoubted and the time to market is shorter, and at the same time it allows providing content suited for the iPad, not just a PDF fitted to the screen.

But I would recommend that all developers who are making custom products and are not using specialized page composition tools stay away from such a methodology.

Source: www.icodermag.com

01/2012

 

CORE TEXT RENDERING
Core Text (short: CT) is another of those technologies developed for the Mac and later ported to iOS.
The Core Text framework is dedicated to text layout and font handling. Just to summarize the capabilities of this framework, consider that it is at the base of the desktop publishing revolution that made the Mac famous in this professional sector.
Like CG, CT has a C-based API, even if there are several third-party open source wrappers that pack the most common functionalities together in a high-level Objective-C interface.

CT should not be used to replace web based rendering built on HTML and CSS; that is a field too complex, better left to dedicated system components such as the UIWebView. Instead it can be used to efficiently render rich text.

CT talks with CG; in fact text rendering is done at the same time as the view's Quartz based rendering. The two APIs have similar conventions and memory management rules, so a developer already accustomed to the Core Foundation programming model will not find any hurdles in understanding the CT API. This gives the developer the possibility to mix text rendering and image drawing at the same rendering stage (CT is limited to text only, it has no image drawing capabilities).

The main reason to use Core Text is that it does direct rendering of text on the page without any intermediaries. It differs from PDF, which considers each page as a whole; it differs from web based techniques, as there is no intermediate language (HTML) or layout interpretation (CSS) in between: you write directly on the page. The basic components behind CT are layout objects such as "runs", which are direct translations of characters into drawable glyphs, "lines" of characters, and "frames", which correspond to paragraphs. The translation of characters into glyphs is done by "typesetters", and the text to be plotted is provided using attributed strings, which are common strings enriched with attribute information (font size, color, ornaments).

You will decide to use Core Text for a magazine whose layout is mostly based on text with a standard layout, so it fits newspapers well too. It's probably not the best choice for glamour magazines where the graphic layout changes on every page and can be quite complicated.
A clear advantage of the Core Text based solution is that you don't need to apply the overlay technique we mentioned in the paragraph dedicated to PDF. With CT you will directly divide your page into frames, and each of these frames will contain text (rendered by CT) or multimedia. Essentially you define the page layout by selecting a size (it can fit the iPad screen or it can be a vertically or horizontally scrolling page), then you decide the size and position of the media content in this page, and finally you define the frames (several rectangular frames) that will contain the text. The text frame organization can be of any kind, from compact single column structures to multicolumn layouts or frames of varying size. Inside the frames you render the text, and Core Text will help you manage line breaks for these paragraphs. Then you can easily give the user the possibility to change font type and size, and the same rendering code can be reused to quickly rearrange the text inside the frames.
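As a minimal sketch of the frame-based rendering just described (assuming a UIView subclass, manual reference counting, and an already prepared NSAttributedString called attrString; this is not code from a specific magazine app):

#import <CoreText/CoreText.h>

- (void)drawRect:(CGRect)rect {
	CGContextRef ctx = UIGraphicsGetCurrentContext();

	// Core Text uses a flipped coordinate system with respect to UIKit: flip the context first
	CGContextSetTextMatrix(ctx, CGAffineTransformIdentity);
	CGContextTranslateCTM(ctx, 0, self.bounds.size.height);
	CGContextScaleCTM(ctx, 1.0, -1.0);

	// build a framesetter from the attributed string and fill a rectangular frame (our text column)
	CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString((CFAttributedStringRef)attrString);
	CGMutablePathRef path = CGPathCreateMutable();
	CGPathAddRect(path, NULL, CGRectInset(self.bounds, 20, 20));
	CTFrameRef frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, NULL);

	// the typesetter breaks the text into lines and draws them inside the frame
	CTFrameDraw(frame, ctx);

	CFRelease(frame);
	CGPathRelease(path);
	CFRelease(framesetter);
}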

The page layout representation can be provided in any form agreed between the developer and the publisher; the best choice is usually XML (after all, it is the base of most markup formats) and it will be shipped to the app together with the texts (XML as well) and the assets in a zip package.
One limitation of Core Text is that it is a text drawing technology, not optimized for editing (which we don't need at this stage) or user interaction. This means that if we want to provide text highlighting or select-and-copy features we'll need to implement them on our own; the framework provides some APIs to facilitate this task, but the code to implement these features must still be written by the developer, managing every single detail. Even so, all these tasks are greatly simplified in comparison with PDF: here you have full control of the text and its position on screen, while a pdf remains an opaque entity hidden behind a complex data structure that you cannot control in its entirety.

Our recommendation: if you must implement a digital magazine without extreme layout requirements, with some multimedia content and fast, powerful control of the text, Core Text is the first technology to consider.

An excellent tutorial on the subject is available at this link on the Ray Wenderlich blog: http://www.raywenderlich.com/4147/how-to-create-a-simple-magazine-app-with-core-text

Source: www.icodermag.com

01/2012

 

The Magazine is a PDF File
You may like it or not, but if your software house is committed to developing a magazine iPad app, the magazine will most likely be given to you as a PDF file. As there is no way to "escape" from it, in the end you will need to develop your own pdf reader or integrate some free or commercial external library.
The reason why pdf is still the dominant format in the e-publishing world is clear: most publishers are porting their existing printed publications to the iPad, and for obvious budget reasons they want to reuse the investment already made in the creation of their issues. You will not escape the pdf format dictatorship except in two cases: the publication is brand new and digital only, so there are no previous investments to drive the choice; or the publisher has a large budget and/or is a strong user experience (UX) believer and accepts allocating the extra budget to recreate its publications in a different format. Both cases are not so uncommon among publishers that already made the effort to bring their products to the web (with the notable exception of those that did it in Flash!), but the majority of small and medium publishers will still be locked to the pdf format.

Unfortunately pdf is not the best way to port a magazine to the iPad, for several reasons:

•     printed magazine pages are usually larger than the iPad screen: when the page is fit to the screen, all characters appear smaller, so something readable on printed paper can become unreadable without zooming; but zooming is not always efficient, and in particular it is not loved by readers, who may lose their "orientation" inside the page.

•     printed magazine pages do not have the same aspect ratio as the iPad screen: a page that fits the screen will be bordered by empty stripes at the top/bottom or left/right.

•     often printed page layouts are optimized for facing pages, e.g. a panoramic picture spread across two pages; in portrait orientation these graphical details are lost, while in landscape you can appreciate the two-page layout but the characters are too small to read comfortably.

•     as these files are not optimized for digital use, the outlines (table of contents) and annotations (links to pages or external resources) are normally not exported; even if your pdf reader code can handle this information, in the majority of cases it is simply not available and you will need to define a different way to provide it.

•     the official pdf format supports multimedia content; unfortunately iOS is not able to handle it, so all interactive content must be provided outside the pdf file.

Page rendering in iOS (and OSX too) is achieved through the Quartz 2D API, provided by the Core Graphics framework (CG for short). Quartz 2D is the two-dimensional drawing engine on which many (but not all) of the drawing capabilities of iOS are based. The PDF API is a subset of the huge CG API. This API is "old fashioned": it is not based on Objective-C but on plain old C, and memory management follows the Core Foundation (CF) rules, which differ from the Obj-C ones. Special attention must therefore be paid to avoiding memory leaks, as each PDF page manipulation can take several megabytes and leaks will easily trigger the memory watchdog, force-quitting your app.

That said, it is quite straightforward to render a PDF page by following these basic steps:

1. get the CG reference to the pdf page to be drawn;
2. get the current graphics context for the view that will contain the page;
3. instruct Quartz to draw the pdf page to the context.
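As a rough sketch of these three steps (the file name and page index are placeholders, and in a real reader the CGPDFDocumentRef would be opened once and cached rather than inside drawRect: of a hypothetical page view):

- (void)drawRect:(CGRect)rect
{
    // 1. get the CG reference to the pdf page to be drawn
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"magazine" withExtension:@"pdf"];
    CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((__bridge CFURLRef)url);
    CGPDFPageRef page = CGPDFDocumentGetPage(document, 1); // page numbers start at 1

    // 2. get the current graphics context for the view that will contain the page
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // pdf pages have the origin at the bottom left, so flip the context
    // and fit the page media box into the view bounds
    CGContextTranslateCTM(ctx, 0, self.bounds.size.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);
    CGContextConcatCTM(ctx, CGPDFPageGetDrawingTransform(page, kCGPDFMediaBox, self.bounds, 0, true));

    // 3. instruct Quartz to draw the pdf page to the context
    CGContextDrawPDFPage(ctx, page);

    CGPDFDocumentRelease(document); // the page is owned by the document
}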

As you can see, apart from the steps required by the Quartz drawing model, the full rendering is accomplished by the system and you don't need any knowledge of the pdf data format. For you the pdf rendering processor is just a black box, which is clear when you see that all CG data structures are opaque and their inner contents can be accessed only via the API.
But a valid pdf magazine reader cannot limit itself to rendering, so you will need to support zooming. Since the maximum zoom level can theoretically be very high (don't forget that characters in a pdf file are vectors, like fonts on the computer: they never lose precision even at extreme zoom levels), it is impossible to render the fully zoomed page into a canvas much larger than the device screen: here we have pixels, not vectors, and the app would crash immediately because all the memory would be consumed by a single page. So you will be forced to introduce tiling techniques that limit the effective rendering to the visible part of the page, which is not always an easy task.
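One common way to implement the tiling is to back the page view with a CATiledLayer, so that Quartz asks only for the tiles that are actually visible at the current zoom level. A minimal sketch (the TiledPDFView class name and the unretained page property are illustrative, not from any framework):

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface TiledPDFView : UIView
@property (nonatomic, assign) CGPDFPageRef page; // real code should retain/release the page
@end

@implementation TiledPDFView

+ (Class)layerClass
{
    return [CATiledLayer class]; // tiles are rendered on demand, per zoom level
}

- (void)drawRect:(CGRect)rect
{
    // called once per visible tile, with the context already clipped to that tile
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(ctx, 0, self.bounds.size.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);
    CGContextConcatCTM(ctx, CGPDFPageGetDrawingTransform(self.page, kCGPDFMediaBox, self.bounds, 0, true));
    CGContextDrawPDFPage(ctx, self.page);
}

@end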

Document parsing is more difficult: it is required if you want to extract outlines or annotations, or do text search and highlighting. In this case, apart from a few metadata extraction functions, what the API gives you is a set of functions that allow you to explore the data structures inside the document. You will not get any information from the file if you don't explore the data tree correctly and follow the specs of the PDF document.
This is made worse by the many versions the PDF spec has gone through over the years and by the fact that many publishers still use old software that exports content in old formats.
I developed a general purpose PDF explorer as part of a commitment for a client who asked me to build a general purpose PDF reader; but as it is really hard to implement the entire official PDF reference, my suggestion is to concentrate on the most used features and test them with many documents. As I said before, CG navigates the data tree but it doesn't interpret it for us!
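As an example of what "exploring the data tree" means, here is a rough sketch that walks from the document catalog to the top-level outline entries (the dictionary keys follow the PDF specification; the document is assumed to be opened elsewhere, and error handling and recursion into child entries are omitted):

CGPDFDictionaryRef catalog = CGPDFDocumentGetCatalog(document); // document opened as shown earlier

CGPDFDictionaryRef outlines = NULL;
if (CGPDFDictionaryGetDictionary(catalog, "Outlines", &outlines)) {
    // "First" points to the first top-level entry; entries are chained through "Next"
    CGPDFDictionaryRef item = NULL;
    if (CGPDFDictionaryGetDictionary(outlines, "First", &item)) {
        do {
            CGPDFStringRef title = NULL;
            if (CGPDFDictionaryGetString(item, "Title", &title)) {
                NSString *entry = (__bridge_transfer NSString *)CGPDFStringCopyTextString(title);
                NSLog(@"outline entry: %@", entry);
            }
        } while (CGPDFDictionaryGetDictionary(item, "Next", &item));
    }
}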

The last section of this part (a long explanation, but required given the importance of the topic) is how to provide multimedia content on top of a PDF file: the iPad is such a versatile device that we cannot limit ourselves to simple page rendering. By adding extra content to the printed page you can leverage the device's characteristics while still benefiting from the investment made in the magazine's creation.

There are many reasons to justify this choice: a printed advertisement can offer a video instead of a static picture, a printed link to a web page can be replaced by an active link opening a web view, or we can show the current weather using an html5 widget. As I previously said, it is not recommended to embed all this content inside the pdf file: it will not be rendered by Quartz and you will still be forced to traverse the data tree to extract the CG object reference for further manipulation. Moreover, not all publishers are aware of these capabilities, or their digital publishing software is too old to fully support them.

So the best solution is based on the “overlay technique”.
This methodology consists of representing each page as two layers:

•     the bottom layer ("rendering layer") contains the PDF rendering, i.e. the bitmap image of the page;
•     the top layer ("overlay layer") draws all overlays and is sensitive to user touches.

The overlay layer is typically made of UIKit components: we'll add a UIWebView for html widgets, a UIScrollView to display a gallery of sliding images, or a media player view for video playback. The overlay descriptions are usually provided in a separate file, e.g. xml, json or plist, packed together with the pdf file and all assets (movies, images, html files, music files) in a zip archive.
The app downloads the zip file, unpacks it and then, for each page, uses the pdf page to fill the rendering layer and the overlay information associated with that page to build the overlay layer.
Note that this technique can also be applied to the other rendering techniques we'll discuss in the next paragraphs; in that case it helps overcome many of the pdf format's limitations. The main requirements for the developer are to define a suitable format, follow every page zoom and rotation with a corresponding overlay transformation, and finally provide the publisher with the tools and guidelines required to easily create such overlays.
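A minimal sketch of how a page could be assembled with this technique (class names, frames and URLs are illustrative: the TiledPDFView is the one sketched earlier, and in a real app the overlay widgets would be built from the per-page description file rather than hard-coded):

UIView *pageContainer = [[UIView alloc] initWithFrame:[UIScreen mainScreen].bounds];

// bottom layer ("rendering layer"): the rendered pdf page
TiledPDFView *renderingLayer = [[TiledPDFView alloc] initWithFrame:pageContainer.bounds];
renderingLayer.page = pdfPage; // CGPDFPageRef obtained as shown earlier
[pageContainer addSubview:renderingLayer];

// top layer ("overlay layer"): transparent, hosts the interactive widgets
UIView *overlayLayer = [[UIView alloc] initWithFrame:pageContainer.bounds];
overlayLayer.backgroundColor = [UIColor clearColor];

// e.g. an html widget whose frame and URL would come from the overlay description file
UIWebView *htmlWidget = [[UIWebView alloc] initWithFrame:CGRectMake(40, 60, 300, 200)];
[htmlWidget loadRequest:[NSURLRequest requestWithURL:
                         [NSURL URLWithString:@"http://example.com/widget.html"]]];
[overlayLayer addSubview:htmlWidget];

[pageContainer addSubview:overlayLayer];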

source: www.icodermag.com

01/2012

 

This article was written by our CTO, Carlo Vigiani, for iCoder magazine.

One of the great improvements in every iPad owner's lifestyle is the possibility of carrying any sort of magazine or book everywhere, thanks to the screen size and the light weight of the device, which facilitate both reading and carrying. In particular, reports show that while the market for printed publications is shrinking, there is a huge increase in the number of subscriptions to the digital versions of the same products (the interested reader can see this report from MPA: http://www.magazine.org/association/press/mpa_press_releases/mag-mobile-reader-study.aspx)

Apple is following this trend with great interest, and this is quite clear if we look at the evolution of the iOS features introduced since the release of the first version dedicated to the iPad, that is 3.2.
In particular, three milestones have been reached, one for each of three major releases of the operating system:

•     iOS 3.2 was enriched with the CoreText framework, a technology dedicated to rendering text on screen, long available on Mac OSX but never ported to earlier versions of the iPhone OS.

•     iOS 4.x introduced the concept of auto-renewable subscriptions, in addition to standard non-consumable In App Purchases; this feature arrived after long discussions between Apple, which applies a 30% commission on every In App sale and forbids access to cheaper external stores from its devices, and publishers looking for customer loyalty mechanisms.

•     finally, iOS 5.0 added the Newsstand feature, which provides a central place to collect all magazine and newspaper apps and at the same time provides night-time content push to all subscribers, letting them immediately read the latest issues of their publications and sparing them the extra time (sometimes long) required for the download.

What Apple did not provide is a common, unified developer platform dedicated to creating magazine consumption apps. This led to many initiatives meant to help publishers enter the iPad market with their own magazines, taken by major and well known companies, such as Adobe with its Digital Publishing business, as well as by many start-ups, each with its own solution.

As I said, Apple doesn't provide a single solution, but developers have available a set of frameworks and techniques, with different levels of complexity, that provide different ways of representing the page on the screen.
There is no single optimal choice, as the final decision must account for aspects that go beyond purely technical considerations.
In this article we will try to describe these solutions mainly from the app developer's point of view, but we will not forget to enumerate the pros and cons that can affect the publisher's decision on which technology to adopt.

Page rendering overview
We assume that you, the developer, are at the point in your app development where the magazine has been purchased and downloaded and is ready to be read. Your document data is safely stored in the device file system, and it can be a single pdf file, a collection of html and css files, or a directory containing assets of different formats, such as images, videos, html5 widgets and text files. You are now facing the problem of taking one page (which can extend beyond the screen boundaries) and presenting it in the empty space of the UIView dedicated to page rendering.

In the next post I will present the following methodologies to achieve this result:

•     pdf document rendering
•     pre-rendered image display
•     free format CoreText rendering
•     web based approach

01/2012 – source: www.icodermag.com

 

Dear Publishers,

we have finally built a system that allows you to publish magazines, books, newspapers, catalogs or any other publication at no additional cost for each new issue or for each new reader.

We cater to small publishers as well as major publishing houses. After testing our prototypes, and after more than one year of development, i3Factory® is pleased to introduce a software system that allows you to publish your own issues on the App Store without expensive investments.

Through Apple's App Store, the Android Market or the Amazon App Store, your audience becomes the worldwide online market, giving you the possibility of reaching readers around the world.

The costs of printing on paper keep rising; they do not allow the publisher large print runs, nor plans to reach a geographically wider audience.

With our publishing system, printing costs disappear: readers browse your publication on the iPad (and iPhone), and the cost for new publications is always zero.

We also note that the experience of reading a magazine on the iPad is far more satisfying than reading the same publication on paper.

 

SOME FEATURES

  1. Your Own Universal Application will be published on Apple® App Store;
  2. Unlimited publications from PDF files;
  3. No infrastructure costs: host the publications on your own Internet or Intranet servers. Have 100% control and autonomy over your content;
  4. Offer your readers & audience the best mobile/tablet browsing experience with high definition texts and images, videos and so much more;
  5. Wide audience: your publications will be available worldwide;
Magazine using i3Factory editorial

 

 

ADVANTAGES

  • Economy of scale: buy a one-time license and create as many mobile publications as you wish in just a few clicks!
  • Earnings: publishers can offer publications for free or for a fee.
  • Easy to use: easily publish your magazine or publications from your PDFs. i3Factory Editorial® technology automatically exports your links and bookmarks from your PDF to your iPad & iPhone app.
  • Mobility: consult your publications offline; once downloaded, the publication is available for reading without any type of online connection.
  • Fast download: everything works over wifi or 3G data connections. Give your audience a great experience: with an internet connection the pages are immediately available as you flip through the document.
  • Sustainable development: go green. With i3Factory Editorial® all your publications have a positive carbon balance sheet. Help preserve our environment, save paper, reduce printing, save the trees and help decrease greenhouse gases!
  • Personalization: create your own graphic interface for your readers and a table of contents for quick navigation.
  • Security: host your publications on your own Internet or Intranet servers. Stay in full control of your interactive publications and your content (archives, subscriptions, sales campaigns, …).
  • Multimedia content: add clickable zones (go to a page, or links to websites) inside your interactive publication and/or PDF, plus HTML5. Engage readers with interactivity & videos from inside the pages of your publication.
  • Performance: you can find what you want in the blink of an eye.
  • Technology: i3Factory is a certified Application Factory. We are up to date with the latest technological developments, hence allowing us to provide you with the highest-performing tool on the market today.

    Be on the cutting edge of technology!

COSTS

Obviously prices vary with the needs of the publisher, which normally require some "customization".

Our solution starts from 900 euros for small publishers; a solution that contains all the features necessary for most small to medium-sized publishers starts from 1500 euros, up to a maximum of €5000 for medium and large publishers.

You can find more information on the packages on this page:

New editorial system for iPad, iPhone & Android

or directly on the i3F Editorial web site (http://i3factory.com/editorial)


 

Steve Jobs said: "imagine you're a professor teaching a class on how to write iPhone apps! You want people to mail apps around… you can get certified and register up to 100 iPhones, apps can be circulated and posted for up to 100 iPhones."

With the new iOS 4, it is possible to distribute applications wirelessly without going through iTunes.
Obviously the applications can be installed only on those devices for which we have created the appropriate provisioning profiles, and if you already have these files, delivery is really easy.

First of all, select "Build and Archive" from the Xcode menu. Your project is stored in the "Archived Applications" section of the Xcode Organizer (Window > Organizer).

Next, select the archive you want to distribute in the Xcode Organizer and choose "Share Application…" at the bottom of the window. Choose the appropriate provisioning profile and then "Distribute for Enterprise".

In the distribution window, enter the title and the full URL of the ipa file (where you plan to host your application), for example http://tuodominio.com/example.ipa.

Besides the plist file and the ipa file, you will need the provisioning profile and a simple index file, for example:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
        "http://www.w3.org/TR/html4/loose.dtd">
<html>
 <head>
 <title>My application</title>
 </head>
 <body>
 <ul>
 <li><a href="http://miosito.com/example.mobileprovision">
 Install a sample Provisioning File</a></li>
 <li><a href="itms-services://?action=download-manifest&url=http://miosito.com/example.plist">
 Install Application</a></li>
 </ul>
 </body>
 </html>
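For reference, the example.plist manifest referenced by the itms-services link is generated by Xcode during the "Share Application…" step; it typically looks like the following (bundle identifier, version, title and URL are placeholders to adapt to your own app):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
        "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
 <key>items</key>
 <array>
  <dict>
   <key>assets</key>
   <array>
    <dict>
     <key>kind</key>
     <string>software-package</string>
     <key>url</key>
     <string>http://miosito.com/example.ipa</string>
    </dict>
   </array>
   <key>metadata</key>
   <dict>
    <key>bundle-identifier</key>
    <string>com.yourcompany.example</string>
    <key>bundle-version</key>
    <string>1.0</string>
    <key>kind</key>
    <string>software</string>
    <key>title</key>
    <string>Example</string>
   </dict>
  </dict>
 </array>
</dict>
</plist>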

With these files uploaded to the server, all you have to do is share the link to the index file so that your customers can tap it to install the provisioning profile and the app directly from Mobile Safari on their iOS devices. A much more relaxed and effective experience than installing through the iTunes sync process.
