
In an interesting article Matthew Panzarino asks why the iOS 7 design looks so different from the previous one (Source: http://cdn.theapplelounge.com/wp-content/uploads/2013/06/iOS6vsiOS7_icons.png).

After seeing Apple's iOS 7 presentation you now know that the icons on the home screen (the so-called springboard) will change slightly.
Many of the new icons were designed mainly by members of Apple's marketing and communications department, no longer by the app development teams. Jony Ive (now head of Human Interface at Apple) guided the design team step by step in setting the look and the color palette of the icons of the so-called stock apps.

iOS7 Springboard

How much has the appearance of the app icons really changed compared to iOS 6?
Let us look at the comparison image created by @pawsupforu:

(Image: iOS 6 vs iOS 7 icons comparison)
Note that some icons have been taken directly from OS X (Safari), while others have been completely redesigned (Calendar)

ICON DIMENSIONS: from 114px to 120px

iOS7 icons
iOS 7 Guide Freebie PSD

Besides the flat design, one of the major changes is the new icon size.
In iOS 7 the application icons are now 120px (compared to the previous 114px) and the corner radius (border radius) is now 27px (compared to the previous 20px). With this change comes the need to update the icon sizes of our apps.
Fortunately, designer Seevi Kargwal has created the aforementioned iOS 7 icon template in PSD format to help facilitate the redesign process. You can check out more of his work.

At the following link http://dribbble.com/shots/1111211-IOS-7-Guide-Freebie-PSD you will find 2 attachments:
IOS_7_Guide_freebie_PSD.psd 900 KB
IOS-7-Guide-Full-Size.jpg 300 KB

iOS 7 Icon Rounded Corner Radius

The web site "Cult of Mac" argues that Jony Ive has given the iOS7 icons the same rounded corners as the iMac.

iOS7 icons
Home icons – iOS7 (Source)

In its article, Cult of Mac argues that the new head of Human Interface, Jony Ive, has redesigned the iOS operating system as a multi-layered parallax.
Ive has thus migrated his hardware design philosophies to iOS: the Messages app icon shows how the corners of the icon have the same tapered edges found on Apple's iMac products.
The difference is only a small number of pixels that most users will probably never notice, so Brad Ellis, who first discovered it, created a comparison GIF so you can actually see the changes:

The iOS7 icons have the same corner curvature as the iMac (Brad Ellis).

On his blog, Joel Page details the icon corner radius, as you can see in the image below:

(Image: iOS7 icon corner radius dimensions – Source)

The new iOS7 icon is a square of 120×120 px.

We conclude by noting that the design changes in iOS 7 bring important innovations in icon design.

The icons on the iPhone home screen received a slight increase in size: from 57px to 60px at standard resolution, and from 114px to 120px on Retina displays.

 

(Image: App Icon Template)
We have introduced a new golden-ratio grid and a new, much brighter color scheme, which you will find included in the App Icon Template PSD file by following the link on this page: http://appicontemplate.com/ios7.
App Icon Template is a free Photoshop template that makes it easier to design icons: by changing a single object it automatically generates all the different formats required on iOS and Android.

Why video composition

You may think that video composition should be limited to applications like iMovie or Vimeo, and that this subject, at least from the developer's point of view, is confined to a niche of video experts. Instead it can be extended to a broader range of applications, not essentially limited to practical video editing. In this post I will provide an overview of the AV Foundation framework applied to a practical example.

In my particular case the challenge was to build an application that, starting from a set of existing video clips, was able to build a story by attaching a subset of these clips based on decisions taken by the user while interacting with the app. The final play is a set of scenes, shot in different locations, that compose a story. Each scene consists of a prologue, a conclusion (epilogue) and a set of smaller clips that will be played by the app based on the user's choices. If the choices are correct, the user will be able to play the whole scene up to its happy ending, but in case of mistakes the user will return to the initial prologue scene or to some intermediate scene. The diagram below shows a possible scheme of a typical scene: one prologue, a winning stream (green), a few branches (yellow are intermediate, red are losing branches) and a happy ending. So somewhere in TRACK1 the user will be challenged to take a decision; if he/she is right the game will continue with TRACK2, if not it will enter the yellow TRACK4, and so on.

iPhone & iPad: Movie Game Storyboard
What I have in my hands is the full set of tracks, each track representing a specific subsection of a scene, and a storyboard which gives me the rules to be followed in order to build the final story. So the storyboard is made of the scenes, of the tracks that compose each scene and of the rules that establish the flow through these tracks. The main challenge for the developer is to put together these clips and play a specific video based on the current state of the storyboard, then advance to the next, select a new clip again and so on: everything should be smooth and interruptions limited. Besides, the user needs to make decisions by interacting with the app, and this can be done by overlapping the movie with some custom controls.

The AV Foundation Framework

Trying to reach the objectives explained in the previous paragraph using the standard Media Player framework view controllers, MPMoviePlayerController and MPMoviePlayerViewController, would be impossible. These controllers are good for playing a movie with the system controls, full-screen and device rotation support, but absolutely not for advanced control. Since the release of the iPhone 3GS the camera utility has had some trimming and export capabilities, but these capabilities were not exposed to developers through public functions of the SDK. With the introduction of iOS 4, the work done by Apple on the iMovie app gave developers a rich set of classes that allow full video manipulation. All these classes have been collected and exported in a single public framework, called AV Foundation. This framework has existed since iOS 2.2, when it was dedicated to audio management with the well-known AVAudioPlayer class; it was then extended in iOS 3 with the AVAudioRecorder and AVAudioSession classes, but the full set of features that allow advanced video capabilities arrived only with iOS 4 and was fully presented at WWDC 2010.

The position of AV Foundation in the iOS frameworks stack is just below UIKit, behind the application layer, and immediately above the basic Core Services frameworks, in particular Core Media, which is used by AV Foundation to import the basic timing structures and functions needed for media management. In any case you can note the different position in the stack in comparison with the very high-level Media Player framework. This means that this framework cannot offer a plug-and-play class for simple video playing, but you will appreciate the high-level, modern concepts behind it; for sure we are not at the level of older frameworks such as Core Audio.

(image source: Apple iOS Developer Library)

Building blocks

The class organization of AV Foundation is quite intuitive. The starting point and main building block is AVAsset. AVAsset represents a static media object and is essentially an aggregate of tracks, which are timed representations of a part of the media. Each track is of a uniform type, so we can have audio tracks, video tracks, subtitle tracks, and a complex asset can be made of multiple tracks of the same type, e.g. multiple audio tracks. In most cases an asset is made of one audio and one video track. Note that AVAsset is an abstract class, so it is unrelated to the physical representation of the media it represents; besides, creating an AVAsset instance doesn't mean that we have the whole media ready to be played: it is a purely abstract object.


There are two concrete asset classes available: AVURLAsset, to represent a media in a local file or in the network, and AVComposition (together with its mutable variant AVMutableComposition) for an asset composed by multiple media. To create an asset from a file we need to provide its file URL:

NSDictionary *optionsDictionary = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
AVURLAsset *myAsset = [AVURLAsset URLAssetWithURL:assetURL options:optionsDictionary];

The options dictionary can be nil, but for our purposes, that is making a movie composition, we need to calculate the duration exactly and provide random access to the media. This extra option, setting the AVURLAssetPreferPreciseDurationAndTimingKey key to YES, could require extra time during asset initialization, depending on the movie format. If the movie is in QuickTime or MPEG-4 format, the file contains additional summary information that cancels this extra parsing time; but there are other formats, like MP3, where this information can be extracted only after decoding the media file, and in such cases the initialization time is not negligible. This is a first recommendation we give to developers: please use the right file format for the application.
In our application we already know the characteristics of the movies we are using, but in a different kind of application, where you must do some editing on user-imported movies, you may be interested in inspecting the asset properties. In that case we must remember the basic rule that initializing an asset doesn't mean we have loaded and decoded the whole asset in memory: every property of the media file can be inspected, but this could require some extra time. For completeness we simply introduce the way asset inspection can be done, referring the interested reader to the reference documentation (see the suggested readings list at the end of this post). Basically each asset property can be inspected using an asynchronous protocol called AVAsynchronousKeyValueLoading, which defines two methods:

- (AVKeyValueStatus)statusOfValueForKey:(NSString *)key error:(NSError **)outError
- (void)loadValuesAsynchronouslyForKeys:(NSArray *)keys completionHandler:(void (^)(void))handler

The first method is synchronous and immediately returns the knowledge status of the specified value. E.g. you can ask for the status of "duration" and the method will return one of these possible statuses: loaded, loading, failed, unknown, cancelled. In the first case the key value is known and can be retrieved immediately. In case the value is unknown, it is appropriate to call the loadValuesAsynchronouslyForKeys:completionHandler: method, which at the end of the operation will call the callback given in the completionHandler block, which in turn will query the status again and take the appropriate action.
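As an illustration, here is a minimal sketch of loading the duration property asynchronously (assuming assetURL points to a valid media file, as in the earlier snippet):

AVURLAsset *asset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
[asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"duration"]
                     completionHandler:^{
    NSError *error = nil;
    AVKeyValueStatus status = [asset statusOfValueForKey:@"duration" error:&error];
    if (status == AVKeyValueStatusLoaded) {
        // the value is now known and can be retrieved without blocking
        NSLog(@"Duration: %f s", CMTimeGetSeconds([asset duration]));
    } else {
        // manage the loading/failed/cancelled cases, e.g. by inspecting "error"
    }
}];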

Video composition

As I said at the beginning, my storyboard is made of a set of scenes and each scene is composed of several clips whose playing order is not known a priori. Each scene behaves separately from the others, so we'll create a composition for each scene. When we take a set of assets, or tracks, and build a composition from them, all in all we are creating another asset. This is the reason why the AVComposition and AVMutableComposition classes are in fact subclasses of the base AVAsset class.
You can add media content inside a mutable composition by simply selecting a segment of an asset, and adding it to a specific range of the new composition:

- (BOOL)insertTimeRange:(CMTimeRange)timeRange ofAsset:(AVAsset *)asset atTime:(CMTime)startTime error:(NSError **)outError

In our example we have a set of tracks and we want to add them one after the other in order to generate a continuous sequence of clips. So the code can simply be written in this way:

 

AVMutableComposition *composition = [AVMutableComposition composition];
CMTime current = kCMTimeZero;
NSError *compositionError = nil;
for (AVAsset *asset in listOfMovies) {
    BOOL result = [composition insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration])
                                       ofAsset:asset
                                        atTime:current
                                         error:&compositionError];
    if (!result) {
        if (compositionError) {
            // manage the composition error case
        }
    } else {
        current = CMTimeAdd(current, [asset duration]);
    }
}

First of all we have introduced the concept of time. Note that media time is different from the usual concept of time. First of all, time can move back and forth; besides, the time rate can be higher or lower than 1x if you are playing the movie in slow motion or fast forward. Furthermore, it is considered more convenient to represent time not as a floating point or integer number but as a rational number. For this reason the Core Media framework provides the CMTime structure and a set of functions and macros that simplify the manipulation of these structures. So in order to build a specific time instance we do:

CMTime myTime = CMTimeMake(value,timescale);

which in fact specifies a number of seconds given by value/timescale. The main reason for this choice is that movies are made of frames, and frames are paced at a fixed rate per second. So for example if we have a clip which has been shot at 25 fps, then it is convenient to represent the single frame interval as a CMTime with value=1 and timescale=25, corresponding to 1/25th of a second. One second will be given by a CMTime with value=25 and timescale=25, and so on (of course you can still work with pure seconds if you like; simply use the CMTimeMakeWithSeconds(seconds, preferredTimescale) function). So in the code above we initially set the current time to 0 seconds (kCMTimeZero), then start iterating on all of our movies, which are the assets in listOfMovies. Then we add each of these assets at the current position of our composition using their full range ([asset duration]). For every asset we move our composition head (current) forward by the length (in CMTime) of the asset. At this point our composition is made of the full set of tracks added in sequence. We can now play them. The sketch below recaps this time arithmetic before moving on.
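A small illustrative sketch, using the values from the 25 fps example above:

CMTime oneFrame  = CMTimeMake(1, 25);              // 1/25th of a second
CMTime oneSecond = CMTimeMake(25, 25);             // 25 frames = 1 second
CMTime total     = CMTimeAdd(oneSecond, oneFrame); // 26/25 = 1.04 seconds
Float64 seconds  = CMTimeGetSeconds(total);        // 1.04
// working directly in seconds, with a preferred timescale of 25:
CMTime twoSeconds = CMTimeMakeWithSeconds(2.0, 25);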

Playing an asset

The AV Foundation framework doesn't offer any built-in full player as we are used to seeing with MPMoviePlayerViewController. The engine that manages the playing state of an asset is provided by the AVPlayer class. This class takes care of all aspects related to playing an asset, and essentially it is the only class in AV Foundation that interacts with the application view controllers to keep the application logic in sync with the playing status: this is relevant for the kind of application we are considering in this example, as the playback state may change during movie execution based on specific user interactions at specific moments inside the movie. However, we don't have a direct relation between AVAsset and AVPlayer, as their connection is mediated by another class called AVPlayerItem. This class organization has the sole purpose of separating the asset, considered as a static entity, from the player, which is purely dynamic, by providing an intermediate object that represents a specific presentation state for an asset. This means that with a given, unique asset we can associate multiple player items, all representing different states of the same asset and played by different players. So the flow is: from a given asset, create a player item and then assign it to the final player.

AVPlayerItem *compositionPlayerItem = [AVPlayerItem playerItemWithAsset:composition];
AVPlayer *compositionPlayer = [AVPlayer playerWithPlayerItem:compositionPlayerItem];

 

In order to render it on screen, we have to provide a view capable of displaying the current playing status. We already said that iOS doesn't offer an off-the-shelf view for this purpose, but what it offers is a special Core Animation layer called AVPlayerLayer. You can insert this layer in your player view's layer hierarchy or, as in the example below, use it as the base layer for the view. So the suggested approach is to create a custom MovieViewer and set AVPlayerLayer as its base layer class:

// MovieViewer.h

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
@interface MovieViewer : UIView {
}
@property (nonatomic, retain) AVPlayer *player;
@end

// MovieViewer.m

@implementation MovieViewer
+ (Class)layerClass {
    return [AVPlayerLayer class];
}
- (AVPlayer *)player {
    return [(AVPlayerLayer *)[self layer] player];
}
- (void)setPlayer:(AVPlayer *)player {
    [(AVPlayerLayer *)[self layer] setPlayer:player];
}
@end

// Instantiating MovieViewer in the scene view controller
// We suppose “viewer” has been loaded from a nib file
// MovieViewer *viewer
[viewer setPlayer:compositionPlayer];

At this point we can play the movie, which is quite simple:

[[viewer player] play];

Observing playback status

It is relevant for our application to monitor the status of the playback and to observe some particular timed events occurring during the playback.
As far as status monitoring is concerned, you follow the standard KVO-based approach by observing changes in the status property of the player:

// inside the SceneViewController.m class we’ll register to player status changes
[viewer.player addObserver:self forKeyPath:@"status" options:NSKeyValueObservingOptionNew context:NULL];

// and then we implement the observation callback
-(void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
    if (object == viewer.player) {
        AVPlayer *player = (AVPlayer *)object;
        if (player.status == AVPlayerStatusFailed) {
            // manage failure
        } else if (player.status == AVPlayerStatusReadyToPlay) {
            // player ready: manage success state (e.g. by playing the movie)
        } else if (player.status == AVPlayerStatusUnknown) {
            // the player is still not ready: manage this waiting status
        }
    }
}

Differently from the KVO-observable properties, timed-event observation is not based on KVO: the reason is that the playhead moves continuously and playing is usually done on a dedicated thread. So the system prefers to send its notifications through a dedicated channel, which in this case is a block-based callback that we can register to track such events. We have two ways to observe timed events:

  • registering for periodic intervals notifications
  • registering when particular times are traversed

In both methods the user can specify a serial queue the callbacks will be dispatched to (it defaults to the main queue) and of course the callback block. It is relevant to note the serial behaviour of the queue: all events will be queued and executed one by one; for frequent events you must ensure that these blocks execute fast enough to allow the queue to process the next blocks, and this is especially true if you're executing the block on the main thread, to avoid making the application unresponsive. Don't forget to schedule this block to run on the main thread if you update the UI.
Registration to periodic intervals is done in this way, where we ask for a 1 second callback whose main purpose will be to refresh the UI (typically updating a progress bar and the current playback time):

// somewhere inside SceneController.m
id periodicObserver = [viewer.player addPeriodicTimeObserverForInterval:CMTimeMakeWithSeconds(1.0, 1)
                                                                  queue:NULL
                                                             usingBlock:^(CMTime time) {
    [viewer updateUI];
}];
[periodicObserver retain];

// and in the clean up method
-(void)cleanUp {
[viewer.player removeTimeObserver:periodicObserver];
[periodicObserver release];
}

// inside MovieViewer.m
-(void)updateUI {
// do other stuff here
// …
// we calculate the playback progress ratio by dividing the current playhead position by the total movie duration
float progress = CMTimeGetSeconds(player.currentTime)/CMTimeGetSeconds(player.currentItem.duration);
// then we update the movie viewer progress bar
[progressBar setProgress:progress];
}

 

Registration to timed events is done using a similar method which takes as its argument a list of NSValue representations of CMTime (AV Foundation provides an NSValue category that adds CMTime support):

// somewhere inside SceneController.m
id boundaryObserver = [viewer.player addBoundaryTimeObserverForTimes:timedEvents queue:NULL usingBlock:^{
[viewer processTimedEvent];
}];
[boundaryObserver retain];
// inside MovieViewer.m
-(void)processTimedEvent {
// do something in the UI
}

In both cases we need to unregister and deallocate the two opaque observer objects somewhere in our scene controller; we may suppose the existence of a cleanUp method that will be assigned this task:
-(void)cleanUp {
[viewer.player removeTimeObserver:periodicObserver];
[periodicObserver release];
[viewer.player removeTimeObserver:boundaryObserver];
[boundaryObserver release];
}

While this code shows the general way to handle an event, in our application it is more appropriate to assign each event a specific action, that is, we need to customize each handling block. Looking at the picture below, you can see that at specific timed intervals inside each of our clips we assigned a specific event.


The figure is quite complex and not all relationships have been highlighted. Essentially what you can see is the "winning" sequence made of all the green blocks: they have been placed consecutively in order to avoid the playhead jumping to different segments when the user takes the right decisions, so playback will continue smoothly without interruption. With the exception of the prologue track, which is just the prologue of the story and requires no user interaction at this stage, and its corresponding conclusion, simply an epilogue where the user is invited to go to the next scene, all other tracks have been marked with a few timed events, identified by the dashed red vertical lines. Essentially we have identified 4 kinds of events:

  • segment (clip) starting point: this will be used as a destination point for the playhead in case of jump;
  • show controls: all user controls will be displayed on screen, user interaction is expected;
  • hide controls: all user controls are hidden, and no more user interaction is allowed;
  • decision point, usually coincident with the hide controls event: the controller must decide which movie segment must be played based on the user decision.

Note that this approach is quite flexible and in theory you can define any kind of event; this depends on the imagination of the game designers. From the point of view of the code, we in fact subclassed AVURLAsset by adding an array of timed event definitions. At composition creation time, these events are re-timed according to the new time base (e.g. if an event fires at 0:35 of a clip, but the starting point of the clip is exactly at 1:45 of the entire sequence, then the event must be re-timed to 1:45 + 0:35 = 2:20). At this point, with the full list of events, we can re-write our boundary registration:

// events is the array of all re-timed events in the complete composition
__block __typeof__(self) _self = self; // avoids retain cycle on self when used inside the block
[events enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
    TimedEvent *ev = (TimedEvent *)obj;
    [viewer.player addBoundaryTimeObserverForTimes:[NSArray arrayWithObject:[NSValue valueWithCMTime:ev.time]]
                                             queue:dispatch_get_main_queue()
                                        usingBlock:^{
        // send the event to the viewer and to the scene controller
        [viewer performTimedEvent:ev];
        [_self performTimedEvent:ev];
    }];
}];

 

 

As you can see, the code is quite simple: for each timed event we register a single boundary which simply calls two methods, one on the movie viewer and one on the scene controller; in both cases we send the specific event so the receiver knows exactly what to do. The viewer normally takes care of UI interaction (it overlays a few controls on top of the player layer, so according to the events these controls are shown or hidden; besides, the viewer knows which control has been selected by the user) while the scene controller manages the game logic, especially in the case of decision events. When the controller finds a decision event, it must move the playhead to the right position in the composition:

 

CMTime goToTime = # determines the starting time of the next segment #
[viewer hide];
[viewer.player seekToTime:goToTime
          toleranceBefore:kCMTimeZero
           toleranceAfter:kCMTimePositiveInfinity
        completionHandler:^(BOOL finished) {
    if (finished) {
        dispatch_async(dispatch_get_main_queue(), ^{
            [viewer show];
        });
    }
}];

 

What happens in the code above is that in case we need to move the playhead to a specific time, we first determine this time and then ask the AVPlayer instance to seek to it, trying to move the head to this position or after with some tolerance (kCMTimePositiveInfinity) but not before (kCMTimeZero in the toleranceBefore: parameter; we need this because the composition is made of consecutive clips, and moving the playhead before the starting time of our clip could show a small portion of the previous clip). Note that this operation is not immediate: even if quite fast, it could take about one second. What happens during this transition is that the player layer shows a still frame somewhere in the destination time region, then starts decoding the full clip and resumes playback from another frame, usually different from the still one. The final effect is not really good, so after some experimentation I decided to hide the player layer immediately before starting the seek and show it again as soon as the player informs me (through the completionHandler callback block) that the movie is ready to be played again.

Conclusions and references

I hope this long post will push other developers to start working on interactive movie apps that leverage the advanced video manipulation capabilities of iOS beyond plain video editing. The AV Foundation framework offers very powerful tools which are not difficult to use. In this post I didn't explore some of the more advanced classes, such as AVVideoComposition and AVSynchronizedLayer. The former is used to create transitions, the latter to synchronize Core Animation effects with the internal media timing.

Great references on the subject can be found in the iOS Developer Library or WWDC videos and sample code:

  • For a general overview: AVFoundation Programming Guide in the iOS Developer Library
  • For the framework classes documentation: AVFoundation Framework Reference in the iOS Developer Library
  • Video: Session 405 – Discovering AV Foundation from WWDC 2010, available in iTunesU to registered developers
  • Video: Session 407 – Editing Media with AV Foundation from WWDC 2010, available in iTunesU to registered developers
  • Video: Session 405 – Exploring AV Foundation from WWDC 2011, available in iTunesU to registered developers
  • Video: Session 415 – Working with Media in AV Foundation from WWDC 2011, available in iTunesU to registered developers
  • Sample code: AVPlayDemo from WWDC 2010 sample code repository
  • Sample code: AVEditDemo from WWDC 2010 sample code repository

 

Written by Carlo Vigiani

When you decide to design an app you must always follow the basic principles of industrial design.
Many people think about this when commissioning an app, but when asked to describe the application, and how their idea should translate into a user experience and graphical interface (User Interface & User Experience), they are unprepared and very often hide behind phrases like "I don't know, this is a job for engineers, let the technicians see to it."

Needless to say, when the "technicians" get to work, these people, who have no idea how to delegate, begin to demand substantive changes, giving advice and directions of every kind, almost always only after the app has reached the final stage of its development.
It is a well-known fact that technicians and engineers first build the core of the application and then adapt the design to it; they do the opposite, in spite of themselves, only if the arguments are valid and convincing, especially when this is decided right from the beginning of the design.
With the "you do that, then we'll see" approach favored by distracted and ill-prepared professionals, the final aesthetic result can be poor, even though every engineer knows that, before starting to write code, you need clear UI principles together with a description of the functions related to the user experience.

Some sophists may criticize me for using the word "user", which sometimes is not very appealing if you think that end users are just people, individuals. This difference in meaning is very clear to me, but for ease of communication, and especially for translation needs, I prefer to use the word "user" or "users" instead of "individual".

 

10 principles for the good design of an app and of a product

First of all, to quote Steve Jobs, I propose the definition of design that convinces me most:
"Design is the fundamental soul of a man-made creation that ends up expressing itself in successive outer layers of the product or service."

Of course, Jobs himself was inspired by the principles of Dieter Rams, former head designer at Braun, who enumerated his 10 principles for the good design of a product:

 

Dieter Rams and his design products

 

  • Dieter Rams Ten Principles of “Good Design”
    1. Good Design Is Innovative : The possibilities for innovation are not, by any means, exhausted. Technological development is always offering new opportunities for innovative design. But innovative design always develops in tandem with innovative technology, and can never be an end in itself.
    2. Good Design Makes a Product Useful : A product is bought to be used. It has to satisfy certain criteria, not only functional but also psychological and aesthetic. Good design emphasizes the usefulness of a product while disregarding anything that could possibly detract from it.
    3. Good Design Is Aesthetic : The aesthetic quality of a product is integral to its usefulness because products are used every day and have an effect on people and their well-being. Only well-executed objects can be beautiful.
    4. Good Design Makes A Product Understandable : It clarifies the product’s structure. Better still, it can make the product clearly express its function by making use of the user’s intuition. At best, it is self-explanatory.
    5. Good Design Is Unobtrusive : Products fulfilling a purpose are like tools. They are neither decorative objects nor works of art. Their design should therefore be both neutral and restrained, to leave room for the user’s self-expression.
    6. Good Design Is Honest : It does not make a product more innovative, powerful or valuable than it really is. It does not attempt to manipulate the consumer with promises that cannot be kept.
    7. Good Design Is Long-lasting : It avoids being fashionable and therefore never appears antiquated. Unlike fashionable design, it lasts many years – even in today’s throwaway society.
    8. Good Design Is Thorough Down to the Last Detail : Nothing must be arbitrary or left to chance. Care and accuracy in the design process show respect towards the consumer.
    9. Good Design Is Environmentally Friendly : Design makes an important contribution to the preservation of the environment. It conserves resources and minimises physical and visual pollution throughout the lifecycle of the product.
    10. Good Design Is as Little Design as Possible : Less, but better – because it concentrates on the essential aspects, and the products are not burdened with non-essentials. Back to purity, back to simplicity.

 

Of course it is easy to see that these principles were conceived for the design of industrial products, but they also apply to the design of applications, especially when those applications run on products that were themselves built according to the principles of good industrial design, as all Apple products are.

Design better, work less

Dieter Rams, creator of the 10 principles, has always expressed his approach to design with the phrase "Weniger, aber besser", or "Less, but better."
Minimalism, as well as being very elegant, is certainly the best way to allow all users to understand a product and its functionality instinctively, and it makes the product itself, or the app, friendly to use (user friendly) and "pure".

Heuristic evaluation

At this point I must also describe the so-called heuristic evaluation.
Heuristic evaluation is an inspection method performed exclusively by usability experts, which assesses whether a set of general design principles has been applied correctly in the UI.
The guidelines ("Ten Usability Heuristics") on which this kind of evaluation is based were developed in 1990 by Jakob Nielsen and Rolf Molich. They were designed for desktop software, but these principles remain valid for touchscreen applications, such as iPhone and iPad apps on iOS, and apps for Android and Windows Mobile.

 

Heuristic evaluation thus measures the product's fidelity and adherence to usability principles, which you can find on Wikipedia (http://en.wikipedia.org/wiki/Usability).

This method, which as we said is a type of inspection, involves only usability experts and does not involve end users: for this reason it is easy to perform, cheap and fast, but it does not take into account the possible evolution of the public's needs. Therefore, in my humble opinion, it is certainly very useful but has the limitation of being inflexible, and lack of flexibility usually stifles creative evolution.

The heuristic evaluation test therefore consists of a series of walkthroughs of the product, carried out separately by each "expert". During the test, the software product is evaluated both for static aspects of the interface, such as window layouts, labels, buttons etc., and for dynamic aspects of the interaction (logical processes and flows).
After finishing the investigation, the experts gather to brainstorm, check the results and compare them against the principles provided in the guidelines to reach common conclusions.

Conclusions

The heuristic evaluation method is certainly very useful and often necessary, but it can also be applied instinctively, if the "expert" who reviews the app is a seasoned business guru.

My doubt, when these methods are followed too rigidly, is that you can easily run the risk of caging evaluations in a bureaucratic system, with its carved-in-stone rules, which severely limits creative people, who are invited by the very creator of the iPhone and iPad to "Think Different".

Think Different has in fact always been the key to the success of every product in every sector.

Obviously, none of the great success stories based on the "Think Different" model has ever ignored the existence of principles like Nielsen's, which are among the cultural foundations of this industry.
We must never ignore the basics, but neither should we be locked into a few principles, however great and important they are, if we want to be innovative and revolutionary.

HTML(5) Approach
The final technique is something that is emerging now, especially thanks to the great improvements in terms of stability and speed introduced by the latest version of iOS for in-app web views. A couple of good examples of this approach are the Ars Technica app (link) and the Bloomberg Businessweek+ magazine (link).

The concept is quite simple: html and css are common and powerful techniques to lay out a page on screen, so why not leverage the skills developed by many web designers to make a magazine that perfectly fits the iPad?
The core building block of this approach is the UIWebView Cocoa Touch object: with this view we can load any kind of html document, local or remote, and lay it out on the page at an adequate speed (though not the fastest) and without surprises. Besides, we can get rid of the overlay technique, as the web view is capable of displaying images, playing movies and of course executing JavaScript-based widgets. This component also provides two-way interaction between the JavaScript world and the Objective-C runtime (which in fact justifies the existence of extension languages such as Objective-J, provided with the Cappuccino framework: http://cappuccino.org/). Finally, the web view is highly responsive to user interactions, and some features like text selection and dictionary lookup come for free. A sketch of this two-way interaction follows below.
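As a hedged sketch of this two-way bridge (the "app" URL scheme and the handleCommandFromWebPage: helper below are hypothetical names, not part of UIKit):

// Objective-C -> JavaScript: evaluate a script in the page context, e.g.
//   NSString *title = [webView stringByEvaluatingJavaScriptFromString:@"document.title"];
// JavaScript -> Objective-C: navigate to a custom URL scheme from the page
// (e.g. window.location = "app://doSomething") and intercept it in the delegate:
- (BOOL)webView:(UIWebView *)webView shouldStartLoadWithRequest:(NSURLRequest *)request navigationType:(UIWebViewNavigationType)navigationType {
    if ([[[request URL] scheme] isEqualToString:@"app"]) {
        [self handleCommandFromWebPage:[request URL]]; // hypothetical helper
        return NO; // swallow the request: it was a native call, not navigation
    }
    return YES;
}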
The open-source world is highly active in this area: projects like Baker (www.bakerframework.com), Siteless (www.siteless.org), Laker (www.lakercompendium.com) and pugpig (pugpig.com) make this kind of solution publicly available.

Honestly, we don't know if this will be the final solution for everybody. Of course a publisher that has already invested in setting up a web site (but not in Flash!), which is quite common among newspapers, will be able to port most of the layout and contents to the iPad, and sometimes this can be achieved by adapting the CMS output views to provide files that can easily be fed to the app.

Care must be taken not to push this approach to extremes: don't forget that web page rendering requires an inner engine, and in the end any intermediate layer requires resources and extra time. Sometimes, and this is particularly evident with the first generation iPad, content updates following user interaction are not very responsive. So it is not recommended to transform every single aspect of the magazine app into web-based content: clearly in this way you're helping all the JavaScript developers not skilled in Objective-C, but a performance penalty will be visible.

As an example, the toolbar typical of all magazine apps, used to access extra features (sharing, table of contents, home page, etc.), should always be built using native Cocoa Touch components and not an html+css solution.

However, if the publisher accepts converting his design flow to a web-based one and you, as a developer, prefer to base your work on consolidated and easy-to-manipulate methodologies, this should be the first choice you take into consideration.

Conclusions
We hope this article gives a good overview of the major techniques used to render pages in a magazine, newspaper or e-book. We may have omitted some technique we're not aware of; in that case, dear reader, any feedback from you is welcome!

About the author: Carlo Vigiani
Carlo is an electronics engineer and software developer located in Italy. He is CTO and co-founder of the startup i3Factory.com, active in the development of iOS, Android and Windows Mobile apps, with a special focus on publishing, tourism and music apps.

Source: www.icodermag.com

01/2012

Pages Pre-Rendered as Images
This technique is heavily used in the highly interactive magazines published using the Adobe Digital Publishing environment: well-known examples are the Condé Nast magazines (Wired is one of the most famous).
The implementation of these magazines starts with the well-known suite of Adobe Digital Publishing tools, InDesign first of all. These tools are used by many publishers around the world, and the latest versions offer the possibility to export the project, besides the ubiquitous pdf format, in a package suited for distribution on the iPad. The output files can be tested using the free Adobe Content Viewer app downloadable from the App Store, but of course the final branded app, together with the server infrastructure required to serve the contents, requires a higher-tier license.

What characterizes this kind of magazine is that at project creation time all pages are pre-rendered as jpeg or png images, and special effects are then overlaid.
This means that the core section of the magazine reader is essentially an image viewer. Sure, these images will span an area slightly larger than the iPad screen, so they will be embedded inside a scroll view, but they are still images. All in all, technically the choice is not bad: the iPad is quite better at rendering images than PDF files, as the calculations needed to transform the pdf data into bitmaps are completely skipped here, while the CPU just needs to decompress the image and send it to the graphics hardware. Exactly as we did in the PDF case, we can apply the overlay technique to superimpose content that requires user interaction on top of the bottom rendering layer.

While this technique is highly efficient from the point of view of rendering time, and is simple to implement since all the page layout complexities have been taken into account and solved by the desktop publishing tools, it has a few limitations that need to be considered:

•     every single page takes considerably more space on disk, and the download time for this kind of magazine increases correspondingly; in comparison with a pdf page, the space taken is much larger, as every pixel of text must be provided in the file and we cannot force high compression ratios without introducing blurring in the text. The pdf page, especially a page made of text only, is much lighter as the text is not pre-rendered.

•     zooming and font resizing are not feasible: both pdf and Core Text redraw the text using vectorial algorithms or per-size font representations, which is not possible with a static image. This means that the magazine needs to be drawn with specific font types and sizes, fonts well suited to jpeg compression (no blur) and to the screen resolution (132 dpi, not so high; things will be better with the next retina display iPad!)

•     text search, highlighting and selection are impossible, unless the digital publishing tool exports, together with the pre-rendered pages, a full map of text coordinates, something I haven't seen yet!

Adobe is not alone in publishing this kind of magazine: there are several custom apps on the market that follow exactly the same approach. It's not bad, but it doesn't leverage the great publishing frameworks that Apple is offering to its developers, and it has too many limitations compared with other techniques. For sure a publisher that masters the digital publishing tools I mentioned before can take advantage of this approach, as the final quality is undoubted and the time to market is the shortest, while at the same time providing content suited for the iPad, and not just a pdf fit on screen.

But I would recommend that all developers who are making custom products and are not using specialized page composition tools stay away from this methodology.

Source: www.icodermag.com

01/2012

 

CORE TEXT RENDERING
Core Text (CT for short) is another of those technologies developed for the Mac and later ported to iOS.
The Core Text framework is dedicated to text layout and font handling. Just to summarize the capabilities of this framework, consider that it is at the base of the desktop publishing revolution that made the Mac famous in this professional sector.
Like CG, CT has a C-based API, even if there are several third-party open source wrappers that pack the most common functionalities together in a high-level Objective-C interface.

CT should not be used to replace web-based rendering built on html and css; that field is too complex and is better left to dedicated system components such as UIWebView. Instead, CT can be used to efficiently render rich text.

CT talks with CG; in fact text rendering happens at the same time as Quartz-based view rendering. The two APIs have similar conventions and memory management rules, so a developer already accustomed to the Core Foundation programming model will not find any hurdles in understanding the CT API. This gives the developer the possibility to mix text rendering and image drawing in the same rendering stage (CT is limited to text only, it has no image drawing capabilities).

The main reason to use Core Text is that it renders text directly on the page without any intermediaries. It differs from PDF, which considers each page as a whole; it differs from web-based techniques because there is no intermediate language (html) or layout interpretation (css) in between: you write directly on the page. The basic components behind CT are layout objects such as "runs", which are direct translations of characters into drawable glyphs, "lines" of characters, and "frames", which correspond to paragraphs. The translation of characters into glyphs is done by "typesetters", and the text to be plotted is provided using attributed strings, which are common strings enriched with attribute information (font size, color, ornaments).

You will choose Core Text for a magazine whose layout is mostly based on text with a standard structure, so it fits newspapers well too. It's probably not the best choice for glamour magazines where the graphic layout changes on every page and can be quite complicated.
A clear advantage of the Core Text based solution is that you don't need to apply the overlay technique we mentioned in the paragraph dedicated to pdf. With CT you directly divide your page into frames, and each of these frames will contain text (rendered by CT) or multimedia. Essentially you define the page layout by selecting a size (it can fit the iPad screen or be a vertically or horizontally scrolling page), then decide the size and position of the media content in the page, and finally define the frames (several rectangular frames) that will contain the text. The text frame organization can be of any kind, from compact single-column structures to multi-column layouts or frames of varying size. Inside the frames you render the text, and Core Text helps you manage line breaks for the paragraphs. You can then easily offer the user the possibility to change font type and size, and the same rendering code can be reused to quickly rearrange the text inside the frames. A minimal rendering sketch follows below.

The page layout representation can be provided in any form agreed between the developer and the publisher; the best choice is probably XML (after all, it's the base of any markup format!) and it will be shipped to the app together with the texts (also XML) and the assets in a zip package.
One limitation of Core Text is that it is a text drawing technology: it is not optimized for editing (but we don't need that at this stage) or user interaction. This means that if we want to provide text highlighting or select-and-copy features, we need to implement them on our own; the framework provides some APIs to facilitate this task, but in any case the code implementing these functionalities must be written by the developer, managing every single detail. In any case, all these tasks are greatly simplified in comparison with PDF: here you have full control of the text and its position on screen, while a pdf is still an opaque entity hidden behind a complex data structure that you cannot control in its entirety. A sketch of the kind of hit-testing helper involved follows below.
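For example, a hedged sketch of a hit-testing helper (mapping a touch point to a character index), on top of which selection and highlighting could be built; it assumes a simple top-to-bottom layout in a view of height viewHeight:

CFIndex characterIndexAtPoint(CTFrameRef frame, CGPoint point, CGFloat viewHeight) {
    CFArrayRef lines = CTFrameGetLines(frame);
    CFIndex lineCount = CFArrayGetCount(lines);
    CGPoint origins[lineCount];
    CTFrameGetLineOrigins(frame, CFRangeMake(0, 0), origins);
    for (CFIndex i = 0; i < lineCount; i++) {
        // line origins are expressed in the flipped Core Text coordinates
        CGFloat lineY = viewHeight - origins[i].y;
        if (point.y <= lineY) {
            CTLineRef line = (CTLineRef)CFArrayGetValueAtIndex(lines, i);
            CGPoint rel = CGPointMake(point.x - origins[i].x, 0);
            return CTLineGetStringIndexForPosition(line, rel);
        }
    }
    return kCFNotFound;
}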

Our recommendation: if you must implement a digital magazine without extreme layout requirements, with some multimedia content and fast, powerful control of text, Core Text is the first technology choice to consider.

An excellent tutorial on the subject is available at this link on Ray Wenderlich's blog: http://www.raywenderlich.com/4147/how-to-create-a-simple-magazine-app-with-core-text

Source: www.icodermag.com

01/2012

 

The Magazine is a PDF File
You may like it or not, but should your software house be committed to developing a magazine iPad app, the magazine will most probably be given to you as a PDF file. As there is no way to "escape" from it, in the end you will need to develop your own pdf reader or integrate some free or commercial external library.
The reason why pdf is still the dominant format in the e-publishing world is clear: most publishers are porting their existing printed publications to the iPad, and for obvious budget reasons they want to reuse all the investment made in the creation of their issues. You will not be able to escape the pdf format dictatorship except in two cases: the publication is brand new and digital-only, so there are no previous investments to drive the final choice; or the publisher has a large budget and/or is a strong user experience (UX) believer and accepts allocating the extra budget to recreate a different format for its publications. Both cases are not so uncommon among publishers that already made the effort to bring their products to the web (with the notable exception of those that did it in Flash!), but the majority of small and medium publishers will still be locked to the pdf format.

Unfortunately pdf is not the best way to port a magazine to the iPad, for several reasons:

•     printed magazine page sizes are usually larger than the iPad screen: this means that when the page fits the screen, all characters appear smaller, and something readable on printed paper can become unreadable without zooming; but zooming is not always efficient, and in particular it's not loved by readers, who may lose their "orientation" inside the page.

•     printed magazine pages do not have the same aspect ratio as the iPad screen: this means that a page that fits the screen will be bordered by empty stripes at top/bottom or left/right.

•     printed page layouts are often optimized for facing pages, e.g. a panoramic picture spread across two pages; when the device is held in portrait orientation these graphical details are lost, while if the device is held in landscape you can appreciate the two-page layout but the characters will be too small to read comfortably.

•     as these files are not optimized for digital use, the outlines (table of contents) and annotations (links to pages or external resources) are normally not exported; this means that even if your pdf reader code is aware of this information, in the majority of cases it is not available, and you will need to define a different way to provide it.

•     the official pdf format supports multimedia content; unfortunately iOS is not able to manage it, so all interactive content must be provided outside the pdf file.

Page rendering is achieved in iOS (and OS X too) through the Quartz 2D API, provided within the Core Graphics framework (CG for short). Quartz 2D is the two-dimensional drawing engine on which many (but not all) of the drawing capabilities of iOS are based. The PDF API is a subset of the huge CG API. This API is "old fashioned": it is not based on Objective-C but on plain old C; besides, all memory management follows the Core Foundation (CF) rules, which are different from the Objective-C ones. This means special attention must be paid to avoiding memory leaks, as each PDF page manipulation can take several megabytes, and leaks will easily trigger the memory watchdog, which force-quits your app.

It is quite immediate to render a PDF page by following these basic steps (a minimal sketch follows the list):

1. get the CG reference to the pdf page to be drawn;
2. get the current graphics context for the view that will contain the page;
3. instruct Quartz to draw the pdf page to the context.
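
A minimal sketch of these three steps inside a view's drawRect: (pdfURL is assumed to point to the document; pages are 1-based):

- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // 1. get the CG reference to the pdf page to be drawn
    CGPDFDocumentRef doc = CGPDFDocumentCreateWithURL((CFURLRef)pdfURL);
    CGPDFPageRef page = CGPDFDocumentGetPage(doc, 1); // first page

    // 2./3. flip the context (pdf origin is bottom-left), fit the page into
    // our bounds and instruct Quartz to draw it
    CGContextTranslateCTM(ctx, 0, self.bounds.size.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);
    CGContextConcatCTM(ctx, CGPDFPageGetDrawingTransform(page, kCGPDFMediaBox, self.bounds, 0, true));
    CGContextDrawPDFPage(ctx, page);

    CGPDFDocumentRelease(doc); // CF rules: we created it, we release it
}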

As you can see, apart from the steps imposed by the Quartz drawing model, the full rendering is accomplished by the system, and you don't need any knowledge of the data format of a pdf file. So for you the pdf rendering processor is just a black box, and this is clear when you see that all CG data structures are in fact opaque and their inner contents can be accessed only via the API.
But a valid pdf magazine reader cannot limit itself to rendering, so you will be required to support zooming. Now, since your maximum zoom level can in theory be very high (don't forget that characters in a pdf file are like fonts in the computer: they never lose precision, even at extreme zoom-ins), it is impossible to render the fully zoomed page in a canvas much larger than the device screen: here we have pixels, not vectors, and the app would immediately crash because all the memory would be gone for one page alone. So you will be forced to introduce tiling techniques that limit the effective rendering to the visible part of the page, not always an easy task. A sketch of the standard tiling approach follows below.
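The standard tool for such tiling is CATiledLayer; a hedged sketch of how a zoomable page view could adopt it (TiledPDFView is a hypothetical class name; requires <QuartzCore/QuartzCore.h>):

@implementation TiledPDFView

+ (Class)layerClass {
    // back the view with a tiled layer: drawRect: is then invoked once per
    // visible tile instead of once for the whole (possibly huge) page
    return [CATiledLayer class];
}

- (void)drawRect:(CGRect)rect {
    // "rect" covers a single tile here: render only that portion of the
    // pdf page, using the same Quartz calls shown above
}

@end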

More difficult is document parsing: this is required if you want to extract outlines and annotations, or do text search and highlighting. In this case, apart from a few metadata extraction functions, what the API gives you is a set of functions that allow you to explore the data structures inside the document. You will not be able to get any information from the file if you don't explore the data tree correctly and don't follow the specs of the PDF document.
This is worsened by the many versions the PDF specs have gone through over the years, and by the fact that many publishers still use old software that exports content in the old formats.
I have developed a general-purpose PDF explorer as part of an engagement for a client who asked me to develop a general-purpose PDF reader; but as it is really hard to implement all the specs of the official PDF reference, my suggestion is to concentrate on the most used features and test them with many documents. As I said before, CG navigates the data tree but it doesn't interpret it for us!

The last section of this part, a long explanation but required given the importance of the topic, is how to provide multimedia content on top of a PDF file: after all, the iPad is such a versatile device that we cannot limit ourselves to simple page rendering. By adding extra content to the printed page you can leverage the device's characteristics and still benefit from the investment made in the magazine's creation.

There are many reasons justifying this choice: e.g. a printed advertisement can offer a video instead of a static picture, a printed link to a web page can be replaced by an active link to a web view, or we can show the current weather using an html5 widget. As I said previously, it is not recommended to put all this content inside the pdf file: it would not be rendered by Quartz, and you would still be forced to traverse the data tree to extract the CG object references for further manipulation. Finally, not all publishers are aware of these functionalities, or their digital publishing software is too old to fully support them.

So the best solution is based on the “overlay technique”.
This methodology consists of representing the page as two layers:

•     the bottom layer (“rendering layer”) will contain the PDF rendering, so it will contain the bitmap image of the page;
•     the top layer ("overlay layer") will draw all overlays and is sensitive to user touches.

The overlay layer is typically made of UIKit components: we'll add a UIWebView for html widgets, a UIScrollView to display a gallery of sliding images, or a media player view for video playback. Typically the overlay descriptions are provided in a separate file, e.g. an xml, json or plist, and they are packed together with the pdf file and all assets (movies, images, html files, music files) in a zip file.
The app downloads the zip file, unpacks it, and then for each page uses the pdf page to fill the rendering layer and the overlay information associated with that page to build the overlay layer, as sketched below.
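As an illustration, a hedged sketch of how the overlay layer could be built from a per-page plist description (the keys, file names, and the basePath/pageView variables are hypothetical, not a standard format):

// overlays.plist: an array of dictionaries, one per overlay, e.g.
// { type = html; frame = "{{20, 40}, {300, 200}}"; src = "widget.html"; }
NSArray *overlays = [NSArray arrayWithContentsOfFile:overlaysPath];
for (NSDictionary *desc in overlays) {
    CGRect frame = CGRectFromString([desc objectForKey:@"frame"]);
    NSString *type = [desc objectForKey:@"type"];
    if ([type isEqualToString:@"html"]) {
        UIWebView *widget = [[UIWebView alloc] initWithFrame:frame];
        NSString *file = [basePath stringByAppendingPathComponent:[desc objectForKey:@"src"]];
        [widget loadRequest:[NSURLRequest requestWithURL:[NSURL fileURLWithPath:file]]];
        [pageView addSubview:widget]; // pageView sits on top of the rendering layer
        [widget release];
    }
    // ...similar branches for "video" (movie player view), image galleries
    // (UIScrollView) and so on
}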
Note that this technique can also be applied to the other rendering techniques we'll talk about in the next paragraphs; in that case it allows you to overcome many of the PDF format's limitations. The main requirements for the developer are to define a suitable format, follow every page zoom and rotation with a corresponding overlay transformation, and finally provide the publisher with the instruments and guidelines required to easily create such overlays.
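To make the flow concrete, here is a minimal Swift sketch of building the overlay layer for one page; the JSON field names and the PageOverlay type are invented for illustration, and the modern AVFoundation/WebKit classes stand in for the Media Player view and UIWebView mentioned above:

```swift
import UIKit
import AVFoundation
import WebKit

// Hypothetical per-page overlay descriptor, e.g.
// [{"type": "video", "x": 40, "y": 300, "w": 640, "h": 360, "asset": "spot.mp4"}]
struct PageOverlay: Decodable {
    let type: String
    let x, y, w, h: CGFloat
    let asset: String
}

// Builds the overlay layer on top of the already-rendered PDF page.
// `issueURL` points at the unpacked zip containing the PDF, the JSON
// descriptors and all media assets.
func buildOverlayLayer(forPage page: Int, in container: UIView, issueURL: URL) {
    let descriptor = issueURL.appendingPathComponent("overlays-\(page).json")
    guard let data = try? Data(contentsOf: descriptor),
          let overlays = try? JSONDecoder().decode([PageOverlay].self, from: data)
    else { return } // no overlays for this page

    for overlay in overlays {
        let frame = CGRect(x: overlay.x, y: overlay.y,
                           width: overlay.w, height: overlay.h)
        switch overlay.type {
        case "video":
            // A video replaces a static printed advertisement.
            let player = AVPlayer(url: issueURL.appendingPathComponent(overlay.asset))
            let playerLayer = AVPlayerLayer(player: player)
            playerLayer.frame = frame
            container.layer.addSublayer(playerLayer)
        case "web":
            // An HTML5 widget (e.g. the current weather) lives in a web view.
            let webView = WKWebView(frame: frame)
            webView.loadFileURL(issueURL.appendingPathComponent(overlay.asset),
                                allowingReadAccessTo: issueURL)
            container.addSubview(webView)
        default:
            break
        }
    }
}
```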

source: www.icodermag.com

01/2012

 

This article was written by our CTO, Carlo Vigiani, for iCoder magazine.

One of the great improvements in every iPad owner's lifestyle is the possibility to carry around any sort of magazine or book, thanks to the screen size and the light weight of the device, which together make reading and carrying easy. In particular, reports show that, in a shrinking printed publications market, there is a huge increase in the number of subscriptions to the digital versions of the same products (the interested reader can see this report from MPA: http://www.magazine.org/association/press/mpa_press_releases/mag-mobile-reader-study.aspx)

Apple is following this trend with great interest, and this is quite clear if we take a look at the evolution of the iOS features introduced since the release of the first version dedicated to the iPad, that is 3.2.
In particular, three milestones have been reached, spread across three major releases of the operating system:

•     iOS 3.2 was enriched with the CoreText framework, a text rendering technology long available on Mac OS X and never ported to earlier versions of the iPhone OS.

•     iOS 4.x introduced the concept of auto-renewable subscriptions, as an addition to standard non-consumable In App Purchases; this feature arrived after long discussions between Apple, which applies a 30% commission on every In App sale and forbids access to any cheaper external store from its devices, and the publishers, who were looking for customer loyalty schemes.

•     finally, iOS 5.0 added the Newsstand feature, which provides a central place to collect all magazine and newspaper apps and at the same time provides overnight content push to all subscribers, letting them read the latest issues of their publications immediately, without the extra time (sometimes long) required for the download.

What Apple did not provide, instead, is a common, unified developer platform dedicated to the creation of magazine reading apps. This led to a lot of initiatives meant to help publishers enter the iPad market with their own magazines. These initiatives were taken both by major, well known companies, such as Adobe with its Digital Publishing business, and by many start-ups, each with its own solution.

As I said, Apple does not provide a single solution, but developers have at their disposal a set of frameworks and techniques, with different levels of complexity, that offer different ways of representing the page on the screen.
There is no optimal choice, as the final decision needs to take into account aspects that go beyond purely technical considerations.
In this article we will try to describe these solutions mainly from the app developer's point of view, but we will not forget to enumerate the pros and cons that can affect the publisher's decision on which technology to adopt.

Page rendering overview
We assume that you, the developer, are at the point of your app development where the magazine has been purchased, downloaded and is ready to be read. Your document data at this point is safely stored in the device file system, and it can be a single PDF file, a collection of HTML and CSS files, or a directory containing assets of different formats, such as images, videos, HTML5 widgets and text files. You are now facing the problem of taking one page (which can extend beyond the screen boundaries) and presenting it in the empty space of the UIView dedicated to page rendering.

In the next post I will present the following methodologies to achieve this result:

•     pdf document rendering
•     pre-rendered image display
•     free format CoreText rendering
•     web based approach

01/2012 – source: www.icodermag.com

 

The i3F Editorial platform rests on four foundation pillars:

  1. Development and reading of documents in PDF format
  2. The use of web services and a network queue
  3. The infrastructure for Apple In App Purchase
  4. Web services for the publisher

Let's look at each of them in detail.

PDF Reader

The basis of document reading is the PDF Reader. To understand the work behind this technology, let us start from what iOS offers its developers.
PDF support is native within the Quartz framework, the 2D graphics framework built into Mac OS X and successfully brought to iOS. To understand the importance of PDF in Apple's operating systems, suffice it to say that PDF is not seen as just another output format: in fact any graphics view in Mac OS X and iOS can be reproduced as a PDF, which turns out to be the format of choice for Quartz-based printing. This explains why support for PDF in Mac OS X and iOS, both inbound and outbound, is natural and does not require the installation of external software (as is the case on Windows, where input requires the installation of Adobe Reader and output requires a suitable plug-in).
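As a quick illustration of this outbound support, the sketch below turns any live view hierarchy into a PDF; UIGraphicsPDFRenderer is the modern Swift API (older code would call UIGraphicsBeginPDFContextToFile), but the principle is unchanged:

```swift
import UIKit

// Minimal sketch of producing a PDF from any view hierarchy, which is
// what makes PDF the natural print format on iOS.
func exportPDF(from view: UIView) -> Data {
    let renderer = UIGraphicsPDFRenderer(bounds: view.bounds)
    return renderer.pdfData { context in
        context.beginPage()
        // Draw the view's current contents into the PDF page.
        view.layer.render(in: context.cgContext)
    }
}
```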

That said, this does not mean that things are easy. In fact, the support provided by iOS is essentially limited to the ability to “read” and understand a PDF file, restricted to the functions of rendering it on paper or screen, while the interpretation of all the other data (outline, thumbnails, annotations, etc.) is left to the programmer.
The PDF reader that ships with the apps produced by i3F Editorial is a continuous work in progress, subject to constant improvements and support for new capabilities. Currently it offers the following base features:
– Support for iPhone and iPad
– Fast rendering of the page in portrait (single page) and landscape (two-page spread) orientation, with caching for better performance
– No limit on the number of pages supported (or at least no limit beyond those imposed by the iOS platform)
– Completely based on the iOS rendering engine, so no surprises when the application moves across operating system versions
– Multi-threaded thumbnail loading (see the sketch after this list)
– Mini-thumbnails (iBooks style)
– Page scrubbing (with display of the page number and/or thumbnail)
– Pre-loading of the outline (table of contents with its full hierarchical structure) and of the annotations (links)
– Intra-document links (page jumps) and external links
– In addition to the standard support for external links (http: and mailto: handled within the application, plus any other URL patterns registered by apps installed by the user, e.g. skype:), support for proprietary links to stream video and to display photo galleries; this means that the publisher can create multimedia packages (PDF + multimedia) simply by defining the links within the graphical tool that generates the PDF, without having to carry all the technical complications of embedding media files directly into the PDF (recall that Quartz does not support these file types, effectively discarding that information). Support for these links is gradually increasing.
– Multi-threaded text search (i.e. the document is not locked while searching)
– Saving of the last page viewed (auto-bookmark)
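As promised in the list above, here is a minimal Swift sketch of multi-threaded thumbnail loading with caching; the class and queue names are ours, and rendering stays on one serial background queue because CGPDFDocument is not documented as safe for concurrent access:

```swift
import UIKit

// Minimal sketch: page thumbnails rendered off the main thread and cached,
// so the scrubber and the mini-thumbnail strip stay responsive.
final class ThumbnailLoader {
    private let document: CGPDFDocument
    private let cache = NSCache<NSNumber, UIImage>()
    // One serial background queue: CGPDFDocument is not documented as
    // safe for simultaneous access from several threads.
    private let queue = DispatchQueue(label: "thumbnail-loader", qos: .utility)

    init(document: CGPDFDocument) {
        self.document = document
    }

    func thumbnail(forPage pageNumber: Int, size: CGSize,
                   completion: @escaping (UIImage?) -> Void) {
        if let cached = cache.object(forKey: pageNumber as NSNumber) {
            completion(cached) // cache hit, no rendering needed
            return
        }
        queue.async {
            guard let page = self.document.page(at: pageNumber) else {
                DispatchQueue.main.async { completion(nil) }
                return
            }
            let image = UIGraphicsImageRenderer(size: size).image { ctx in
                let cg = ctx.cgContext
                cg.setFillColor(UIColor.white.cgColor)
                cg.fill(CGRect(origin: .zero, size: size))
                // Flip coordinates and scale the page down to thumbnail size.
                cg.translateBy(x: 0, y: size.height)
                cg.scaleBy(x: 1, y: -1)
                cg.concatenate(page.getDrawingTransform(
                    .mediaBox, rect: CGRect(origin: .zero, size: size),
                    rotate: 0, preserveAspectRatio: true))
                cg.drawPDFPage(page)
            }
            self.cache.setObject(image, forKey: pageNumber as NSNumber)
            DispatchQueue.main.async { completion(image) }
        }
    }
}
```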

Library management

The interface defined by the i3F Editorial library is based on some commonly accepted standard templates (e.g. the iBooks-style shelf of covers, or the App Store-style grid of covers); however, i3F Editorial is not just a software package but also a team of designers and developers able to implement any request made by the customer. The iPad is a formidable creative platform, so publishers who want to bring that creativity into their applications are very welcome.
What stays common across these variations is the technical approach behind them. Currently, the composition of the publisher's archive, and of the user's library, is contained in files (in various formats depending on the complexity of the archive, ranging from XML to JSON up to an actual SQLite database for the most complex cases). The files can be managed entirely by the publisher through our web platform and can be installed on servers hosted by the publisher or by i3factory (which in turn relies on providers it considers reliable). At any time the publisher can change the composition of its catalogue and make it live instantly with a single click; testing the publication in advance through our applications is always possible.
Once the archive file is defined, the application fetches the latest available update at startup. From that moment on, each publication can pass through several states: in the shop (if paid), downloadable (if free or already purchased), installed, read. The transition between these states is carried out safely and in a fully multi-threaded way: this means that you can keep interacting with the application (or even suspend it during a download) without having to wait for the entire package. In addition, the media can be packaged into a single file (in this case, however, the download will take longer) or downloaded on demand (useful when the media are considered optional to the enjoyment of the product).
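A minimal Swift sketch of this life cycle follows; the state names and the session identifier are illustrative, not i3F Editorial's actual code:

```swift
import Foundation

// Illustrative publication life cycle, mirroring the states described above.
enum PublicationState {
    case inShop          // visible in the store, not yet bought
    case downloadable    // free, or already purchased
    case installed       // unpacked on the device
    case read            // opened at least once
}

final class PublicationDownloader: NSObject, URLSessionDownloadDelegate {
    // A background session survives app suspension, so the user can keep
    // interacting with the app (or leave it) while the issue downloads.
    private lazy var session = URLSession(
        configuration: .background(withIdentifier: "issue-downloads"),
        delegate: self, delegateQueue: nil)

    func download(issueAt url: URL) {
        session.downloadTask(with: url).resume()
    }

    func urlSession(_ session: URLSession, downloadTask: URLSessionDownloadTask,
                    didFinishDownloadingTo location: URL) {
        // Move the zip out of the temporary location before returning,
        // then unpack it and flip the publication's state to .installed.
        print("Issue downloaded to \(location)")
    }
}
```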

In App Purchase

Following the latest rules introduced by Apple for the approval of applications and paid content, we provide In App Purchase as the single purchasing solution, namely the ability to buy publications via the App Store (remember that on these transactions Apple withholds 30% of the price charged to the customer).
In any case, i3factory is willing to provide purchasing solutions in addition to In App Purchase, based on external websites run by the publisher. These solutions are not provided as standard and must be agreed upon case by case, because each time it is necessary to consider all the complexity involved in securing transactions and payments made via the web. i3F does not provide any kind of support for payment channels other than the App Store, which must therefore be handled by the customer: in such cases i3F will take care of the integration within the application only.
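For illustration, here is a minimal Swift sketch of a purchase through the original StoreKit API (the one in use when this platform was built); the product identifier is hypothetical:

```swift
import StoreKit

// Minimal sketch of buying a publication through In App Purchase.
final class IssuePurchaser: NSObject, SKProductsRequestDelegate, SKPaymentTransactionObserver {
    private var request: SKProductsRequest?

    func buyIssue(productID: String) {
        SKPaymentQueue.default().add(self) // observe transaction updates
        let request = SKProductsRequest(productIdentifiers: [productID])
        request.delegate = self
        self.request = request // keep a strong reference while it runs
        request.start()
    }

    func productsRequest(_ request: SKProductsRequest,
                         didReceive response: SKProductsResponse) {
        guard let product = response.products.first else { return }
        SKPaymentQueue.default().add(SKPayment(product: product))
    }

    func paymentQueue(_ queue: SKPaymentQueue,
                      updatedTransactions transactions: [SKPaymentTransaction]) {
        for transaction in transactions where transaction.transactionState == .purchased {
            // Unlock the publication here, then tell StoreKit we are done.
            queue.finishTransaction(transaction)
        }
    }
}
```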

Web services for the publisher

The publisher will be able to manage the archive of publications via a web interface based on standard Web 2.0 methodologies. This interface allows:
– The insertion of new publications
– The definition of links and the uploading of multimedia content
– Archive management (editing, deleting, categorizing and testing)
– The ability to temporarily withdraw certain publications from sale
– Encrypted transactions to prevent hacking
Recall that, due to the presence of In App Purchase, sale prices and any paid content must be replicated within Apple's iTunes Connect service. By regulation we cannot provide solutions that automate this task.
The entire package is provided as a self-installing PHP-based bundle that requires minimal server-side software support, which we believe is standard in the toolset available to a publisher.

Dear Publishers,

finally we have built a system that allows you to publish magazines, books, newspapers, catalogs or any other publication at no extra cost for each new issue or for each new reader.

We cater to the small publisher as well as to the major publishing house: after having tested our prototypes, and after more than one year of development, i3Factory® is pleased to introduce a software system that allows you to publish your own issues on the App Store without expensive investments.

Through Apple's App Store, the Android Market or the Amazon App Store, your market becomes the world's online market, with the possibility of reaching readers around the globe.

The costs of printing on paper are higher and higher; they do not allow the publisher large print runs, nor, consequently, plans to reach a geographically wider audience.

With our publishing system, printing costs disappear: readers browse your publication on the iPad (and iPhone), and the cost of each new publication will always be zero.

We would add that the experience of reading a magazine on the iPad is far more satisfying than reading the same publication on paper.

 

SOME FEATURES

  1. Your own Universal Application will be published on the Apple® App Store;
  2. Unlimited publications from PDF files;
  3. No infrastructure costs: host the publications on your own Internet or Intranet servers and keep 100% control and autonomy over your content;
  4. Offer your readers and audience the best mobile/tablet browsing experience, with high definition text and images, videos and much more;
  5. Wide audience: your publications will be available worldwide;
Magazines using i3Factory Editorial

ADVANTAGES

  • Economy of scale: buy a one-time license and create as many mobile publications as you wish in just a few clicks!
  • Earnings: publishers can offer publications for free or for a fee.
  • Easy to use: easily publish your magazines from your PDFs. i3Factory Editorial® technology automatically exports the links and bookmarks from your PDF to your iPad & iPhone app.
  • Mobility: consult your publications offline; once downloaded, a publication is available to read without any kind of online connection.
  • Fast download: all operations work over WiFi or 3G data connections. Give your audience a great experience: with an internet connection the pages are immediately available as you flip through the document.
  • Sustainable development: go green. With i3Factory Editorial® all your publications have a positive carbon balance sheet. Help preserve our environment: save paper, reduce printing, save trees and help decrease greenhouse gases!
  • Personalization: create your own graphic interface for your readers and a table of contents for quick navigation.
  • Security: host your publications on your own Internet or Intranet servers. Stay in full control of your interactive publications and your content (archives, subscriptions, sales campaigns …).
  • Multimedia content: add clickable zones (page jumps or links to websites) inside your interactive publication and/or PDF, plus HTML5. Engage readers with interactivity and videos from inside the pages of your publication.
  • Performance: you can find what you want in the blink of an eye.
  • Technology: i3Factory is a certified Application Factory. We keep up to date with the latest technological developments, allowing us to provide you with the highest-performing tool on the market today.

    Be on the cutting edge of technology!

COSTS

Obviously, prices vary with the needs of the publisher, who normally requires some customization.

The entry-level solution starts at 900 euros for small publishers; a solution containing all the features needed by most small and medium-sized publishers starts at 1,500 euros; prices reach a maximum of 5,000 euros for medium and large publishers.

More information on the available packages can be found on this page:

New editorial system for iPad, iPhone & Android

or directly on the i3F Editorial web site (http://i3factory.com/editorial)


 
