ifndef::doingwholebook[]
:leveloffset: -1
:data-uri:
:rq: ’
endif::doingwholebook[]
:chapnum: 15
:figure-number: 00
[[chap_id15]]
== Drawing
The views illustrated in xref:chap_id14[] were mostly colored rectangles; they had a `backgroundColor` and no more. But that's not what a real iOS program looks like. Everything the user sees is a UIView, and what the user sees is a lot more than a bunch of colored rectangles. That's because the views that the user sees have _content_. They contain _drawing_. pass:none[]
Many UIView subclasses, such as a UIButton or a UILabel, know how to draw themselves. Sooner or later, you're also going to want to do some drawing of your own. You can prepare your drawing as an image file beforehand. You can draw an image as your app runs, in code. You can display an image in a UIView subclass that knows how to show an image, such as a UIImageView or a UIButton. A pure UIView is all about drawing, and it leaves that drawing largely up to you; your code determines what the view draws, and hence what it looks like in your interface.
This chapter discusses the mechanics of drawing. Don't be afraid to write drawing code of your own! It isn't difficult, and it's often the best way to make your app look the way you want it to. (I'll discuss how to draw text in xref:chap_id23[].)
=== Images and Image Views
The basic general UIKit image class is UIImage. UIImage knows how to deal with many standard image types, such as HEIC, TIFF, JPEG, GIF, and PNG.
A UIImage can be used wherever an image is to be displayed; it knows how to provide the image data, and may be thought of loosely as wrapping the image data. It also provides supplementary information about its image, and lets you tweak certain aspects of the image's behavior.(((UIImage)))
pass:none[]
Where will the image data inside a UIImage come from? There are three main pass:[sources:]
* An image file previously stored on disk.
* An image that your app draws as it runs.
* Image data that your app downloads from the network.
The first two are what this chapter is about. Downloading image data is discussed in xref:chap_id37[].
// not sure this where this should go; I've waffled between books
// well, image literals are not working in Xcode 10 anyway, so the heck with the whole thing
////
.Fun with Literals
****
Starting in Xcode 8, you can define a UIColor through a color picker. If you type `color` and ask for code completion, you're offered a Swift _color literal_. Accept this, and a color palette appears; click Other and a full-fledged color picker appears. The chosen color appears in your code as a small rectangular swatch of that color (xref:FIGcolorLiteral[]). Behind the scenes, this equates to calling `#colorLiteral(red:green:blue:alpha)`.
(((color literal)))((("literal, color")))
[[FIGcolorLiteral]]
.A color literal
image::figs/pios_1401a.png[]
Also in Xcode 8, you can define a UIImage through its name alone. In a context where a UIImage is expected, start typing the name of the image as a literal (without quotation marks) and ask for code completion. You'll be offered your image's name as a completion, along with an actual thumbnail of the image. The code completion engine is looking for your image just as `init(named:)` would look for it, so you are guaranteed that the image is being correctly referred to. Accept this, and the thumbnail appears in your code (xref:FIGimageLiteral[]). Behind the scenes, this equates to calling `#imageLiteral(resourceName:)`, which itself calls `init(named:)`.
(((images, literal)))((("literal, image")))
[[FIGimageLiteral]]
.An image literal
image::figs/pios_1501aaa.png[]
****
////
==== Image Files
UIImage can read a stored file, so if an image does not need to be created dynamically, but has already been created before your app runs, then drawing may be as simple as providing an image file as a resource inside your app itself. When an image file is to be included inside your app, iOS has a special affinity for PNG files, and you should prefer them whenever possible.
(The converse operation, saving image data as an image file, is discussed in xref:chap_id36[].)
A pre-existing image file in your app's bundle is most commonly obtained in code through the UIImage initializer +init(named:)+, which takes a string and returns a UIImage wrapped in an Optional, in case the image doesn't exist.(((images, files))) This method looks in two places for the image:(((asset catalog, images)))(((files, image)))
Asset catalog:: We look in the asset catalog for an image set with the supplied name. The name is case-sensitive.
// this is main place in this book where I talk about asset catalogs, so should I be mentioning color literals here?
Top level of app bundle:: We look at the top level of the app's bundle for an image file with the supplied name. The name is case-sensitive and should include the file extension; if it doesn't, _.png_ is assumed.
When calling +init(named:)+, an asset catalog is searched before the top level of the app's bundle. If there are multiple asset catalogs, they are all searched, but the search order is indeterminate, so avoid multiple image sets with the same name.
TIP: The Image library lists images both in the asset catalog and at the app bundle's top level. Instead of calling `init(named:)`, which takes a literal string that you might type incorrectly, you can drag or double-click an image in the Image library to enter an _image literal_ directly into your code. The resulting token represents a call to the UIImage initializer `init(imageLiteralResourceName:)`, and produces a UIImage, not an Optional.(((images, literal)))((("literal, image")))
With +init(named:)+, the image data may be cached in memory, and if you ask for the same image by calling +init(named:)+ again later, the cached data may be supplied immediately. Caching is usually good, because decoding the image on disk into usable bitmap data is expensive.(((images, caching)))(((caching, images)))
Nevertheless, sometimes caching may not be what you want; if you know you're just going to fetch the image once and put it into the interface immediately, caching might represent an unnecessary strain on your app's memory. If so, there's another way: you can read an image file from your app bundle (not the asset catalog) directly and without caching, by calling +init(contentsOfFile:)+, which expects a pathname string. To obtain that pathname string, you can get a reference to your app's bundle with +Bundle.main+, and Bundle then provides instance methods for getting the pathname of a file within the bundle, such as `path(forResource:ofType:)`.(((app, bundle resources)))(((resources, app bundle)))
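For instance (a minimal sketch, assuming a file _pic.png_ at the top level of the app bundle):
----
if let path = Bundle.main.path(forResource: "pic", ofType: "png") {
    let im = UIImage(contentsOfFile: path) // read directly, no caching
}
----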
// Another consideration might be speed. When you use `init(named:)`, an image in the asset catalog is found quickly, but `init(contentsOfFile:)` finds an image file in the app bundle faster; and if you don't have to call `path(forResource:ofType:)` because you already know how to construct the pathname, that's even faster.
// I'm a little worried about saying this; do I really know it?
// NOTE to self: There is a really bizarre bug where a color png image in the asset catalog, if it uses exactly the same r, g, and b values, will be displayed as grayscale. Clearly the compilation process that optimizes png at build time has a mistake in it.
===== Hardware-related image variants
An image file can come in multiple variants for use on different hardware.
When the image file is stored in the app bundle, these variants are distinguished through the use of special name suffixes:(((high-resolution, image files)))pass:none[]
High-resolution variants:: On a device with a double-resolution screen, when an image is obtained by name from the app bundle, a file with the same name extended by +@2x+, if there is one, will be used automatically, with the resulting UIImage marked as double-resolution by assigning it a +scale+ property value of +2.0+.(((screen, high-resolution)))(((resolution))) Similarly, if there is a file with the same name extended by +@3x+, it will be used on a device with a triple-resolution screen, with a +scale+ property value of +3.0+.(((high-resolution, screen)))(((images, resolution)))
+
Double- and triple-resolution variants of an image file should have dimensions double and triple those of the base file. But thanks to the UIImage +scale+ property, a high-resolution variant of an image has the same CGSize as the single-resolution image. On a high-resolution screen, your code and your interface continue to work without change, but your images look sharper.
+
This works for UIImage `init(named:)` and `init(contentsOfFile:)`. If there is a file called _pic.png_ and a file called _pic@2x.png_, then on a device with a double-resolution screen, these methods will access _pic@2x.png_ as a UIImage with a scale of `2.0`:
+
----
let im = UIImage(named:"pic") // uses pic@2x.png
if let path = Bundle.main.path(forResource: "pic", ofType: "png") {
    let im2 = UIImage(contentsOfFile:path) // uses pic@2x.png
}
----
// iOS 8 doesn't run on any single-resolution iPhone-sized devices, so an iPhone-only app doesn't need any single-resolution image variants. But iOS 8 does run on a single-resolution iPad.
Device type variants:: A file with the same name extended by +~ipad+ will automatically be used if the app is running natively on an iPad. You can use this in a universal app to supply different images automatically depending on whether the app runs on an iPhone (or iPod touch), on the one hand, or on an iPad, on the other. (This is true not just for images but for _any_ resource obtained by name from the bundle. See Apple's _Resource Programming Guide_ in the documentation archive.)(((iPad, resources that differ on)))(((resources, differing on iPad)))(((images, device-dependent)))
+
This works for UIImage `init(named:)` and Bundle `path(forResource:ofType:)`. If there is a file called _pic.png_ and a file called _pic\~ipad.png_,
then on an iPad, these methods will access _pic\~ipad.png_:
+
----
let im = UIImage(named:"pic") // uses pic~ipad.png
let path = Bundle.main.path(
    forResource: "pic", ofType: "png") // uses pic~ipad.png
----
// See https://stackoverflow.com/questions/49470039/loading-many-uiimages-from-disk-blocks-main-thread for some stuff about loading speed. Asset catalogs are fast by the way, and I should emphasize that that is one of their benefits. And there is a WDDC 2018 video that says more about asset catalog features, including how slicing works, how vectors work, etc. (though it turns out I had this mostly right anyway).
If possible, however, you will probably prefer to supply your image in an asset catalog rather than in the app bundle. This has the advantage, among other things, that you can forget all about those name suffix conventions! An asset catalog knows when to use an alternate image within an image set, not from its _name_, but from its _place_ in the catalog:(((images, asset catalog)))
* Put the single-, double-, and triple-resolution alternatives into the slots marked ``1x,'' ``2x,'' and ``3x'' respectively.
* For a distinct iPad variant of an image, check iPhone and iPad in the Attributes inspector for the image set, and separate slots for those device types will appear in the asset catalog.
* An image set in an asset catalog can make numerous further distinctions based on a device's processor type, wide color capabilities, and more.
Many of these distinctions are used not only by the runtime when the app runs, but also by the App Store when thinning your app for a specific target device.
// For these and other reasons, asset catalogs should be regarded as preferable over keeping your images at the top level of the app bundle.
===== Vector images
An image file in the asset catalog can be a vector-based PDF:(((PDF, image)))(((images, PDF)))(((images, vector)))pass:none[]
* If you switch the Scales pop-up menu to Single Scale and put the image into the single slot, it will be resized automatically for double or triple resolution, and because it's a vector image, the resizing will be sharp.
* If you switch the Scales pop-up menu to Individual and Single Scales, put the image also into the ``1x'' slot, and check Preserve Vector Data for this slot, the image will be resized sharply for _any_ size, either when scaled automatically (by a UIImageView or other interface item), or when your code scales the image by redrawing it (as I'll describe later in this chapter).
New in Xcode 11 and iOS 13, the system supplies more than 1500 standard named SVG _symbol images_ intended for use both as icons and in conjunction with text. To obtain one as a UIImage in code, call the UIImage initializer `init(systemName:)`. The symbol images are displayed along with their names in the SF Symbols app, available for
pass:[download]
from Apple.(((images, symbol)))(((SF Symbols)))(((symbol images)))
A few symbol images are so commonly used that they are vended directly as class properties of UIImage: `.add`, `.remove`, `.close`, `.actions`, `.checkmark`, and `.strokedCheckmark`. In the nib editor, an interface object that accepts an image, such as a UIButton, lets you specify a symbol image by name using a pop-up menu.
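For instance (a minimal sketch):
----
let star = UIImage(systemName: "star.fill") // any name from the SF Symbols app
let plus = UIImage.add // a commonly used symbol image, vended directly
----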
Certain details of how a symbol image is drawn may be dictated through its `symbolConfiguration` (UIImage.SymbolConfiguration). You can supply this when you create the image, or you can change it by calling the UIImage instance methods `.withConfiguration(_:)` or `.applyingSymbolConfiguration(_:)`. Alternatively, you can attach a symbol configuration to the image view that displays the symbol image. Configurations can involve one of nine weights, one of three scales, a font or text style, and a point size, in various combinations; this is to facilitate association with text.
I'll talk about that in detail in xref:chap_id23[].(((symbol images, configuration)))
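Meanwhile, here's a minimal sketch of both approaches (the point size, weight, and scale values are arbitrary):
----
let config = UIImage.SymbolConfiguration(pointSize: 20, weight: .bold)
// supply the configuration when creating the image...
let im = UIImage(systemName: "gear", withConfiguration: config)
// ...or attach a configuration to the image view that displays it
let iv = UIImageView(image: im)
iv.preferredSymbolConfiguration = UIImage.SymbolConfiguration(scale: .large)
----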
===== Asset catalogs and trait collections
An asset catalog can distinguish between variants of an asset intended for different trait collections (xref:SECtraitCollections[]). The chief distinctions you might want to draw will involve size classes or user interface style (light and dark mode).(((asset catalog, trait collections)))
Consider an image that is to appear in different variants depending on the size class situation. In the Attributes inspector for your image set, use the Width Class and Height Class pop-up menus to specify which size class possibilities you want slots for. If we're on an iPhone with the app rotated to landscape orientation, and if there's both an Any Height and a Compact Height alternative in the image set, the Compact Height variant is used. These features are live as the app runs; if the app rotates from landscape to portrait, and there's both an Any Height and a Compact Height alternative in the image set, the Compact Height variant is _replaced_ with the Any Height variant in your interface, there and then, _automatically_.(((orientation, resources that depend on)))(((resources, depending on trait collection)))(((size classes, resources that depend on)))
In the same way, an image can vary depending on whether the environment is in light mode or dark mode. To display the necessary slots, in the Attributes inspector, use the Appearance pop-up menu. If you choose Any, Dark, you'll get a slot for light or unspecified mode and a slot for dark mode, which is usually what you want. Again, a UIImage obtained from the asset catalog is live, and will switch _automatically_ to the appropriate variant when the interface style changes. A named color defined in the asset catalog can make the same distinction, making it a dynamic color (as I described in xref:chap_id14[]).((("mode, light or dark", "images")))((("images", "mode, light or dark")))
If you need a specific trait collection variant of an image or named color in an asset catalog, and you know its name, you can call `init(named:in:compatibleWith:)`; the third parameter is the trait collection. But what if you _already_ have this UIImage or UIColor and you _don't_ know its name? For that matter, how does the interface in your running app, which _already_ contains a UIImage or a UIColor, automatically change when the trait collection changes? This magic is baked into UIImage and UIColor.(((trait collections, asset catalog)))
Let's start with UIImage. When an image is obtained from an asset catalog through UIImage +init(named:)+, its +imageAsset+ property is a pass:[UIImageAsset] that effectively points back into the asset catalog at the image set that it came from. Each image in the image set has a trait collection associated with it (its +traitCollection+). By calling the UIImageAsset method `image(with:)`, passing a trait collection, you can ask an image's +imageAsset+ for the image from the same image set appropriate to that trait collection.(((UIImageAsset)))
A built-in interface object that displays an image, such as a UIImageView, is automatically trait collection-aware; it receives the `traitCollectionDidChange(_:)` message and responds pass:[accordingly]. To demonstrate how this works under the hood, we can build a custom UIView with an +image+ property that behaves the same way:
----
class MyView: UIView {
    var image : UIImage!
    override func traitCollectionDidChange(_ prevtc: UITraitCollection?) {
        super.traitCollectionDidChange(prevtc)
        self.setNeedsDisplay() // causes draw(_:) to be called
    }
    override func draw(_ rect: CGRect) {
        if var im = self.image {
            if let asset = self.image.imageAsset {
                im = asset.image(with:self.traitCollection)
            }
            im.draw(at:.zero)
        }
    }
}
----
The really interesting part is that no actual asset catalog is needed. You can treat images as trait-based alternatives for one another _without_ using an asset catalog. You might do this because your code has constructed the images from scratch or has obtained them over the network while the app is running. The technique is to instantiate a UIImageAsset and then associate each image with a different trait collection by _registering_ it with this same pass:[UIImageAsset]. Here's an example:
----
let tcreg = UITraitCollection(verticalSizeClass: .regular)
let tccom = UITraitCollection(verticalSizeClass: .compact)
let moods = UIImageAsset()
let frowney = UIImage(named:"frowney")!
let smiley = UIImage(named:"smiley")!
moods.register(frowney, with: tcreg)
moods.register(smiley, with: tccom)
----
The amazing thing is that if we now display either `frowney` or `smiley` in a UIImageView, we see the image associated with the environment's current vertical size class, and it automatically switches to the other image when the app changes orientation on an iPhone. Moreover, this works even though I didn't keep any persistent reference to `frowney`, `smiley`, or the UIImageAsset! (The reason is that the images are cached by the system and they maintain a strong reference to the pass:[UIImageAsset] with which they are registered.)
UIColor works in a simpler way. There is no UIColorAsset class. A dynamic color is declared by calling `init(dynamicProvider:)`, whose parameter is a function that takes a trait collection and returns a color. The knowledge of the color corresponding to a trait collection is baked directly into the dynamic color, and you can extract it by calling `resolvedColor(with:)`, passing a trait collection.(((color, dynamic)))((("mode, light or dark", "colors")))(((dynamic, color)))
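For instance, here's a sketch of a dynamic color that resolves to black in light mode and white in dark mode:
----
let dynamic = UIColor { tc -> UIColor in
    return tc.userInterfaceStyle == .dark ? .white : .black
}
let dark = UITraitCollection(userInterfaceStyle: .dark)
let resolved = dynamic.resolvedColor(with: dark) // white
----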
// true but boring
// You can also specify a target trait collection while fetching an image from the asset catalog or from your app bundle, by calling +init(named:inBundle:compatibleWithTraitCollection:)+. The bundle specified will usually be `nil`, meaning the app's main bundle.
===== Namespacing image files
When image files are numerous or need to be clumped into groups, the question arises of how to divide them into namespaces. Here are some possibilities (a combined sketch follows the list):(((folders, image files)))(((files, image, namespacing)))(((namespacing resources)))(((resources, namespacing)))(((asset catalog, folders)))
Folder reference:: Instead of keeping images at the top level of your app bundle, you can keep them in a _folder_ in the app bundle. This is easiest to maintain if you put a _folder reference_ into your project; the folder itself is then copied into the app bundle at build time, along with all its contents. There are various ways to retrieve an image in such a folder:
* Call UIImage `init(named:)` with the folder name and a forward slash in front of the image's name in the name string. If the folder is called _pix_ and the image file is called _pic.png_, then the ``name'' of the image is `"pix/pic.png"`.
* Call Bundle `path(forResource:ofType:inDirectory:)` to get the image file's path, followed by UIImage `init(contentsOfFile:)`.
* Obtain the bundle path (`Bundle.main.bundlePath`) and use NSString pathname and FileManager methods to drill down to the desired file.
Asset catalog folder:: An asset catalog can provide virtual folders that function as namespaces. Suppose that an image set _myImage_ is inside an asset catalog folder called _pix_; if you check Provides Namespace in the Attributes inspector for that folder, then the image can be accessed through UIImage `init(named:)` by the name `"pix/myImage"`.
Bundle:: A fuller form of `init(named:)` is `init(named:in:)`, where the second parameter is a bundle. This means you can keep images in a secondary bundle, such as a framework, and specify that bundle as a way of namespacing the image. This approach works regardless of whether the image comes from an asset catalog or sits at the top level of the pass:[bundle.]
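Here's the combined sketch promised above (the folder name _pix_ and the file and image set names are placeholders):
----
// folder reference in the app bundle
let im1 = UIImage(named: "pix/pic.png")
// the same file, by way of the bundle path
if let path = Bundle.main.path(
    forResource: "pic", ofType: "png", inDirectory: "pix") {
    let im2 = UIImage(contentsOfFile: path)
}
// namespaced asset catalog folder
let im3 = UIImage(named: "pix/myImage")
----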
===== Image files in the nib editor
Many built-in Cocoa interface objects will accept a UIImage as part of how they draw themselves; a UIButton can display an image, a UINavigationBar or a UITabBar can have a background image (xref:chap_id25[]), and so on. The image you want to supply will often come from an image file.
// When you configure an interface object's image in the nib editor, you're instructing that interface object to call +init(named:)+ to fetch its image, and everything about how +init(named:)+ conducts the search for the image will be true of how the interface object finds its image at runtime.
The nib editor stands ready to help you. The Attributes inspector of an interface object that can have an image will have a pop-up menu from which you can choose an image in your project -- or, new in iOS 13, a built-in symbol image. Your project's images, as well as the built-in symbol images, are also listed in the Image library; from here, you can drag an image onto an interface object in the canvas, such as a button.(((nib editor, image views)))
==== Image Views
When you want an image to appear in your interface, not inside a button or other interface object but purely as an image, you'll probably hand it to an image view -- a UIImageView -- which has the most knowledge and flexibility with regard to displaying images and is intended for this purpose.(((UIImageView)))(((image views)))(((images, image views)))
An image view is the displayer of images _par excellence_. In code, just set the image as the image view's `image`. In the nib editor, drag the image from the Image library onto an image view or set its image through the Image pop-up menu, or drag an image from the Image library directly into a plain UIView to get a UIImageView whose image is that image.
TIP: New in iOS 13, an image view (or a UIButton, because its image is contained in an image view) can be configured to display a particular variant of any symbol image assigned to it by setting its `preferredSymbolConfiguration`; you can do that in code or in the nib editor.(((symbol images, image views)))
A UIImageView can actually have _two_ images, one assigned to its +image+ property and the other assigned to its +highlightedImage+ property; the value of the UIImageView's +isHighlighted+ property dictates which of the two is displayed at any given moment. A UIImageView does not automatically highlight itself merely because the user taps it, the way a button does. However, there are certain situations where a UIImageView will respond to the highlighting of its surroundings; within a table view cell, for instance, a UIImageView will show its highlighted image when the cell is highlighted (xref:chap_id21[]).
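For instance (a sketch, assuming images named `"smiley"` and `"frowney"`):
----
let iv = UIImageView(image: UIImage(named: "smiley"))
iv.highlightedImage = UIImage(named: "frowney")
iv.isHighlighted = true // now the frowney image is displayed
----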
// You can also use the notion of UIImageView highlighting yourself however you like.
// boring, unlikely, and in docs
// The documentation warns that if a UIImageView is to be assigned multiple images (such as an +image+ and a +highlightedImage+), they must have the same +scale+ property value. This is because the UIImageView gets its own internal scaling information from an image's scale when it first encounters the image; it does not change its internal scale merely because you switch the value of its +isHighlighted+ property.
A UIImageView is a UIView, so it can have a background color in addition to its image, it can have an alpha (transparency) value, and so forth (see xref:chap_id14[]). An image may have areas that are transparent, and a UIImageView will respect this, so an image of any shape can appear. A UIImageView without a background color is invisible except for its image, so the image simply appears in the interface, without the user being aware that it resides in a rectangular host. A UIImageView without an image and without a background color is invisible, so you could start with an empty UIImageView in the place where you will later need an image and subsequently assign the image in code. You can assign a new image to substitute one image for another, or set the image view's +image+ property to `nil` to remove its image.
How a UIImageView draws its image depends upon the setting of its +contentMode+ property (pass:[UIView.ContentMode]); this property is actually inherited from UIView, and I'll discuss its more general purpose later in this chapter. +.scaleToFill+ means the image's width and height are set to the width and height of the view, filling the view completely even if this alters the image's aspect ratio; +.center+ means the image is drawn centered in the view without altering its size; and so on. Most commonly you'll use `.scaleAspectFit` or `.scaleAspectFill`; they both keep the image's aspect ratio while filling the image view. The difference is that `.scaleAspectFill` fills the image view in both dimensions, permitting some of the image to fall outside the image view. The best way to get a feel for the meanings of the various +contentMode+ settings is to experiment with an image view in the nib editor: in the image view's Attributes inspector, change the Content Mode pop-up menu to see where and how the image draws itself.
You should also pay attention to a UIImageView's +clipsToBounds+ property; if it is `false`, its image, even if it is larger than the image view and even if it is not scaled down by the +contentMode+, may be displayed in its entirety, extending beyond the image view itself.
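In code, the common configuration looks like this (a minimal sketch):
----
let iv = UIImageView(image: UIImage(named: "Mars"))
iv.contentMode = .scaleAspectFill // keep the aspect ratio, fill both dimensions
iv.clipsToBounds = true // don't let the image extend beyond the image view
----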
// WARNING: By default, the `clipsToBounds` of a UIImageView dragged into the nib editor from the Library is `false`. This is unlikely to be what you want!
// okay they seem to have fixed this
When creating a UIImageView in code, you can take advantage of a convenience initializer, `init(image:)`. The default +contentMode+ is +.scaleToFill+, but the image is not initially scaled; rather, _the image view itself is sized to match its image_. You will still probably need to position the UIImageView correctly in its superview. In this example, I'll put a picture of the planet Mars in the center of the app's interface (xref:FIGplainOldMars[]; for the CGRect `center` property, see xref:appb[]):
----
let iv = UIImageView(image:UIImage(named:"Mars"))
self.view.addSubview(iv)
iv.center = iv.superview!.bounds.center
iv.frame = iv.frame.integral
----
[[FIGplainOldMars]]
.Mars appears in my interface
image::figs/pios_1501.png[]
What happens to the size of an existing UIImageView when you assign a new image to it depends on whether the image view is using autolayout. Under autolayout, the size of the image becomes the image view's +intrinsicContentSize+, so the image view _adopts the image's size_ unless other constraints prevent this.(((autolayout, image view)))
// true but boring
// If an image view's `adjustsImageSizeForAccessibilityContentSizeCategory` is `true`, the image view will scale itself up from the image's intrinsic content size if the user switches to an accessibility text size (see xref:chap_id23[]). You can set this property in the nib editor (Adjusts Image Size in the Attributes inspector).
// also for UIButton and NSTextAttachment but I can't think where to say that.
// boring
// (If a UIImageView is assigned both an +image+ and a +highlightedImage+, and if they are of different sizes, the view's +intrinsicContentSize+ adopts the size of the +image+.)
An image view automatically acquires its `alignmentRectInsets` (see xref:chap_id14[]) from its image's `alignmentRectInsets`. If you're going to be aligning the image view to some other object using autolayout, you can attach appropriate `alignmentRectInsets` to the image that the image view will display, and the image view will do the right thing. To do so in code, derive a new image by calling the original image's `withAlignmentRectInsets(_:)` method;
alternatively, you can set an image's `alignmentRectInsets` in the asset catalog (use the four Alignment fields).
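For instance, here's a sketch deriving an image with arbitrary alignment insets:
----
let original = UIImage(named: "Mars")!
let aligned = original.withAlignmentRectInsets(
    UIEdgeInsets(top: 0, left: 0, bottom: 10, right: 0))
let iv = UIImageView(image: aligned) // the image view aligns accordingly
----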
// still broken in 7.1, in the sense that you must apply a left inset in the asset catalog or your other insets are disregarded; but it's just not worth yanking their chain about it here, as I have their attention already
// taking out this warning and adding new sentence; bugs seems to be fixed
// WARNING: In theory, you should be able to set an image's `alignmentRectInsets` in an asset catalog (using the image's Alignment fields). As of this writing, however, this feature is not working correctly.
==== Resizable Images
Certain interface contexts require an image that can be coherently resized to any desired proportions. A custom image that serves as the track of a slider or progress view (xref:chap_id25[]) must be able to fill a space of any length.
// And there can frequently be other situations where you want to fill a background by tiling or stretching an existing image.
Such an image is called a _resizable image_.
To make a resizable image in code, start with a normal image and call its +resizableImage(withCapInsets:resizingMode:)+ method.(((resizable image)))(((images, resizable))) The +capInsets:+ argument is a UIEdgeInsets, whose components represent distances inward from the edges of the image. In a context larger than the image, a resizable image can behave in one of two ways, depending on the +resizingMode:+ value (pass:[UIImage.ResizingMode]):
+.tile+:: The interior rectangle of the inset area is tiled (repeated) in the interior; each edge is formed by tiling the corresponding edge rectangle outside the inset area. The four corner rectangles outside the inset area are drawn unchanged.
+.stretch+:: The interior rectangle of the inset area is stretched _once_ to fill the interior; each edge is formed by stretching the corresponding edge rectangle outside the inset area _once_. The four corner rectangles outside the inset area are drawn unchanged.
In these examples, assume that +self.iv+ is a UIImageView with absolute height and width (so that it won't adopt the size of its image) and with a +contentMode+ of +.scaleToFill+ (so that the image will exhibit resizing behavior). First, I'll illustrate tiling an entire image (xref:FIGtiledMars[]); note that the +capInsets:+ is +.zero+, meaning no insets at all:(((tiling, resizable image)))
----
let mars = UIImage(named:"Mars")!
let marsTiled =
    mars.resizableImage(withCapInsets:.zero, resizingMode: .tile)
self.iv.image = marsTiled
----
[[FIGtiledMars]]
.Tiling the entire image of Mars
image::figs/pios_1502.png[]
Now we'll tile the interior of the image, changing the +capInsets:+ argument from the previous code (xref:FIGtiledMars2[]):
----
let marsTiled = mars.resizableImage(withCapInsets:
    UIEdgeInsets(
        top: mars.size.height / 4.0,
        left: mars.size.width / 4.0,
        bottom: mars.size.height / 4.0,
        right: mars.size.width / 4.0
    ), resizingMode: .tile)
----
[[FIGtiledMars2]]
.Tiling the interior of Mars
image::figs/pios_1503.png[]
Next, I'll illustrate stretching.(((stretching a resizable image))) We'll start by changing just the +resizingMode:+ from the previous code (xref:FIGstretchedMars1[]):
----
let marsTiled = mars.resizableImage(withCapInsets:
    UIEdgeInsets(
        top: mars.size.height / 4.0,
        left: mars.size.width / 4.0,
        bottom: mars.size.height / 4.0,
        right: mars.size.width / 4.0
    ), resizingMode: .stretch)
----
[[FIGstretchedMars1]]
.Stretching the interior of Mars
image::figs/pios_1504.png[]
A common stretching strategy is to make almost half the original image serve as a cap inset, leaving just a tiny rectangle in the center that must stretch to fill the entire interior of the resulting image (xref:FIGstretchedMars2[]):
----
let marsTiled = mars.resizableImage(withCapInsets:
    UIEdgeInsets(
        top: mars.size.height / 2.0 - 1,
        left: mars.size.width / 2.0 - 1,
        bottom: mars.size.height / 2.0 - 1,
        right: mars.size.width / 2.0 - 1
    ), resizingMode: .stretch)
----
[[FIGstretchedMars2]]
.Stretching a few pixels at the interior of Mars
image::figs/pios_1505.png[]
// You should also experiment with different scaling +contentMode+ settings.
In the preceding example, if the image view's +contentMode+ is +.scaleAspectFill+, and if the image view's +clipsToBounds+ is `true`, we get a sort of gradient effect, because the top and bottom of the stretched image are outside the image view and aren't drawn (xref:FIGstretchedMars3[]).
[[FIGstretchedMars3]]
.Mars, stretched and clipped
image::figs/pios_1505b.png[]
Alternatively, you can configure a resizable image in the asset catalog. It is often the case that a particular image will be used in your app chiefly as a resizable image, and always with the same +capInsets:+ and +resizingMode:+, so it makes sense to configure this image once rather than having to repeat the same code.(((asset catalog, images, slicing)))(((slicing in asset catalog)))
// And even if an image is configured in the asset catalog to be resizable, it can appear in your interface as a normal image as well ��� for example, if you assign it to an image view that resizes itself to fit its image, or that doesn't scale its image.
To configure an image in an asset catalog as a resizable image, select the image and, in the Slicing section of the Attributes inspector, change the Slices pop-up menu to Horizontal, Vertical, or Horizontal and Vertical. When you do this, additional interface appears. You can specify the +resizingMode+ with the Center pop-up menu. You can work numerically, or click Show Slicing at the lower right of the canvas and work graphically.
// The graphical editor is zoomable, so zoom in to work comfortably.
This feature is even more powerful than +resizableImage(withCapInsets:resizingMode:)+. It lets you specify the end caps _separately_ from the tiled or stretched region, with the rest of the image being sliced out. In xref:FIGstretchedMars4[], the dark areas at the top left, top right, bottom left, and bottom right will be drawn as is; the narrow bands will be stretched, and the small rectangle at the top center will be stretched to fill most of the interior; but the rest of the image, the large central area covered by a sort of gauze curtain, will be omitted entirely. The result is shown in xref:FIGstretchedMars5[].
[[FIGstretchedMars4]]
.Mars, sliced in the asset catalog
image::figs/pios_1505c.png[]
[[FIGstretchedMars5]]
.Mars, sliced and stretched
image::figs/pios_1505d.png[]
// WARNING: Don't use image slicing in an asset catalog if your deployment target isn't 7.0 or higher.
==== Transparency Masks
Certain interface contexts, such as buttons and button-like interface objects, want to treat an image as a _transparency mask_, also known as a _template_.
// Okay, not all buttons, of course, but let's keep it simple
This means that the image color values are ignored, and only the transparency (alpha) values of each pixel matter. The image shown on the screen is formed by combining the image's transparency values with a single tint color.(((images, template)))(((template images)))(((transparency, mask)))
// or the image of a bar button item of style +UIBarButtonItemStylePlain+ in a toolbar.
The way an image will be treated is a property of the image, its +renderingMode+. This property is read-only; to change it in code, start with an image and generate a new image with a different rendering mode, by calling its `withRenderingMode(_:)` method.
The rendering mode values (pass:[UIImage.RenderingMode]) are:
* +.automatic+
* +.alwaysOriginal+
* +.alwaysTemplate+
The default is +.automatic+, which means that the image is drawn normally except in those particular contexts that want to treat it as a transparency mask. With the other two rendering mode values, you can _force_ an image to be drawn normally, even in a context that would usually treat it as a transparency mask, or you can _force_ an image to be treated as a transparency mask, even in a context that would otherwise treat it normally.
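For instance, to force the Mars image to be treated as a transparency mask (a minimal sketch):
----
let mars = UIImage(named: "Mars")!
let template = mars.withRenderingMode(.alwaysTemplate)
// an interface object displaying this image now draws it using a tint color
----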
////
Apple wants iOS 7 apps to adopt more of a transparency mask look throughout the interface; some of the icons in the Settings app, for example, appear to be transparency masks (xref:FIGsettingsIcons[]).
[[FIGsettingsIcons]]
.Transparency mask icons in the Settings app
image::figs/pios_1505f.png[]
////
To accompany this feature, iOS gives every UIView a +tintColor+, which will be used to tint any template images it contains.(((tint color)))(((views, tint color))) Moreover, this +tintColor+ by default is inherited down the view hierarchy, and indeed throughout the entire app, starting with the window (xref:chap_id14[]). Assigning your app's main window a tint color is probably one of the few changes you'll make to the window; otherwise, your app adopts the system's blue tint color. (Alternatively, if you're using a main storyboard, set the Global Tint color in the File inspector.) Individual views can be assigned their own tint color, which is inherited by their subviews. xref:FIGrenderingMode[] shows two buttons displaying the same background image, one in normal rendering mode, the other in template rendering mode, in an app whose window tint color is red. (I'll say more about template images and +tintColor+ in xref:chap_id25[].)
[[FIGrenderingMode]]
.One image in two rendering modes
image::figs/pios_1505e.png[]
You can assign an image a rendering mode in the asset catalog. Select the image set in the asset catalog, and use the Render As pop-up menu in the Attributes inspector to set the rendering mode to Default (+.automatic+), Original Image (+.alwaysOriginal+), or Template Image (+.alwaysTemplate+). This is an excellent approach whenever you have an image that you will use primarily in a specific rendering mode, because it saves you from having to remember to set that rendering mode in code every time you fetch the image. Instead, any time you call +init(named:)+, this image arrives with the rendering mode already set.
(The symbol images introduced in iOS 13 have no color of their own, so in effect they are _always_ template images.)
Also new in iOS 13, a tint color can be applied to a UIImage directly; call `withTintColor(_:)` or `withTintColor(_:renderingMode:)`. This is useful particularly when you want to draw a symbol image or a template image in a context where there is no inherited tint color (such as a graphics context).(((images, tint color)))(((tint color, image)))
Nonetheless, I find the behavior of these methods rather weird:
Original images become template images:: If you apply `withTintColor` to an ordinary image, it is then treated as a template image -- even if you also set the rendering mode to `.alwaysOriginal`.
Template images may ignore the assigned tint color:: If you apply `withTintColor(_:)` to a template image -- because it's a symbol image, or because you said `.alwaysTemplate`, or because we're in a context that treats an image as a transparency mask -- then if you assign it into a view with a `tintColor` of its own, the tint color you specify is ignored! The view's tint color wins. If you want the tint color you specify to be obeyed, you must also set the rendering mode to `.alwaysOriginal`.
For example, the following code specifically sets a symbol image's tint color to red; nevertheless, what appears on the screen is a blue symbol image (because the default image view `tintColor` is blue):
----
let im = UIImage(systemName:"circle.fill")?.withTintColor(.red)
let iv = UIImageView(image:im)
self.view.addSubview(iv)
----
To get a red symbol image, you have to say this:
----
let im = UIImage(systemName:"circle.fill")?.withTintColor(.red,
    renderingMode: .alwaysOriginal) // *
let iv = UIImageView(image:im)
self.view.addSubview(iv)
----
// NOTE to self: I've filed a bug. Keep an eye on this!
// ok, so an image can be a template image because (1) it's declared template, or (2) it's in a template context, or (3) it's a symbol image. And in any of those cases, saying `withTintColor` fails! It turns a nontemplate into a template, but the tint color comes from the context, not from what you said.
// an image can be original because you declared it original or because it's original in an original context. And in that case, saying `withTintColor` succeeds.
// It could even be worth keeping more than one copy of the same image in the asset catalog, under different names and with different rendering modes.
==== Reversible Images
If the system language is right-to-left, and your app is localized for that language, the entire interface is automatically reversed when your app runs. In general, this probably won't affect your images. The runtime assumes that you _don't_ want images to be reversed when the interface is reversed, so its default behavior is to leave them alone.(((interface, reversing)))(((images, reversing)))
Nevertheless, you _might_ want an image to be reversed when the interface is reversed. Suppose you've drawn an arrow pointing in the direction from which new interface will arrive when the user taps a button. If the button pushes a view controller onto a navigation interface, that direction is from the right on a left-to-right system, but from the left on a right-to-left system. This image has directional meaning within the app's own interface; it needs to flip horizontally when the interface is reversed.
To make this possible in code, call the image's `imageFlippedForRightToLeftLayoutDirection` method and use the resulting image in your interface. On a left-to-right system, the normal image will be used; on a right-to-left system, a reversed variant of the image will be created and used automatically. You can override this behavior, even if the image is reversible, for a particular UIView displaying the image, such as a UIImageView, by setting that view's `semanticContentAttribute` to prevent pass:[mirroring.]
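For instance (a sketch; the `"arrow"` image name is a placeholder):
----
let arrow = UIImage(named: "arrow")!
let im = arrow.imageFlippedForRightToLeftLayoutDirection()
let iv = UIImageView(image: im) // reversed automatically on a right-to-left system
// to opt this image view out of mirroring even for a reversible image:
iv.semanticContentAttribute = .forceLeftToRight
----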
You can make the same determination for an image in the asset catalog using the Direction pop-up menu (choose one of the Mirrors options). Moreover, the layout direction (as I mentioned in xref:chap_id14[]) is a trait,
// . This means that, just as you can have pairs of images to be used on iPhone or iPad, or triples of images to be used on single-, double-, or triple-resolution screens,
so you can have pairs of images to be used under left-to-right or right-to-left layout. The easy way to configure such pairs is to choose Both in the asset catalog's Direction pop-up menu; now there are left-to-right and right-to-left image slots where you can place your images. Alternatively, you can register the paired images with a UIImageAsset in code, as I demonstrated earlier in this chapter.
You can also force an image to be flipped horizontally without regard to layout direction or semantic content attribute by calling its `withHorizontallyFlippedOrientation` method.
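For instance (again with the placeholder `"arrow"` image):
----
let flipped = UIImage(named: "arrow")!.withHorizontallyFlippedOrientation()
----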
[[SECgraphicscontexts]]
=== Graphics Contexts
Instead of plopping an image from an existing image file directly into your interface, you may want to create some drawing yourself, in code. To do so, you will need a _graphics context_. This is where the fun really begins!pass:none[]
A graphics context is basically a place you can draw. Conversely, you can't draw in code unless you've got a graphics context. There are several ways in which you might obtain a graphics context; these are the most common:(((images, drawing)))(((drawing, image)))(((graphics context, drawing into))) pass:none[]
Cocoa creates the graphics context:: You subclass UIView and override `draw(_:)`. At the time your `draw(_:)` implementation is called, Cocoa has already created a graphics context and is asking you to draw into it, right now; whatever you draw is what the UIView will pass:[display.]
Cocoa passes you a graphics context:: You subclass CALayer and override `draw(in:)`, or else you give a CALayer a delegate and implement the delegate's `draw(_:in:)`. The `in:` parameter is a graphics context. (Layers are discussed in xref:chap_id16[].)
You create an ((image context)):: The preceding two ways of getting a graphics context amount to drawing _on demand:_ you slot your drawing code into the right place, and it is called whenever drawing needs to happen. The other major way to draw is just to make a UIImage yourself, once and for all. To create the graphics context that generates the image, you use a UIGraphicsImageRenderer.
Moreover, at any given moment there either is or is not a _current_ graphics context:(((current graphics context)))(((graphics context, current)))
* When UIView's `draw(_:)` is called, the UIView's drawing context is already the current graphics context.
* When CALayer's `draw(in:)` or its delegate's `draw(_:in:)` is called, the `in:` parameter is a graphics context, but it is _not_ the current context. It's up to you to make it current if you need to.
* When you create an image context, that image context automatically becomes the current graphics context.
What beginners find most confusing about drawing is that there are two sets of tools for drawing, which take different attitudes toward the context in which they will draw. One set needs a current context; the other just needs a context:
UIKit:: Various Cocoa classes know how to draw themselves; these include UIImage, NSString (for drawing text), UIBezierPath (for drawing shapes), and UIColor. Some of these classes provide convenience methods with limited abilities; others are extremely powerful. In many cases, UIKit will be all you'll need.
+
With UIKit, you can draw _only into the current context_. If there's already a current context, you just draw. But with CALayer, where you are handed a context as a parameter, if you want to use the UIKit convenience methods, you'll have to make that context the current context; you do this by calling `UIGraphicsPushContext(_:)` (and be sure to restore things with +UIGraphicsPopContext+ later).
Core Graphics:: This is the full drawing API. Core Graphics, often referred to as Quartz, or Quartz 2D, is the drawing system that underlies all iOS drawing; UIKit drawing is built on top of it. It is low-level and consists of C functions (though in Swift these are mostly ``renamified'' to look like method calls). There are a lot of them! This chapter will familiarize you with the fundamentals; for complete information, you'll want to study Apple's _Quartz 2D Programming Guide_ in the documentation archive.
+
With Core Graphics, you must _specify a graphics context_ (a CGContext) to draw into, explicitly, for each bit of your drawing.(((CGContext))) With CALayer, you are handed the context as a parameter, and that's the graphics context you want to draw into. But if there is already a current context, you have no reference to it until you call +UIGraphicsGetCurrentContext+ to obtain it.
You don't have to use UIKit or Core Graphics _exclusively_. On the contrary, you can intermingle UIKit calls and Core Graphics calls in the same chunk of code to operate on the same graphics context. They merely represent two different ways of telling a graphics context what to do.
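For instance, here's a sketch of a single `draw(_:)` implementation that mixes the two (the individual commands are explained in the demonstrations that follow):
----
override func draw(_ rect: CGRect) {
    let con = UIGraphicsGetCurrentContext()! // Core Graphics
    let p = UIBezierPath(ovalIn: CGRect(x:0, y:0, width:100, height:100)) // UIKit
    con.addPath(p.cgPath) // Core Graphics, same context
    UIColor.blue.setFill() // UIKit again
    con.fillPath() // Core Graphics again
}
----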
We have two sets of tools and three ways in which a context might be supplied; that makes six ways of drawing. I'll now demonstrate all six of them! To do so, I'll draw a blue circle (xref:FIGbluecircle[]). Without worrying just yet about the actual drawing commands, focus your attention on how the context is specified and on whether we're using UIKit or Core Graphics.
[[FIGbluecircle]]
.A blue circle
image::figs/pios_1506a.png[]
// incredibly terrifyingly cool utilities https://github.com/CodingMeSwiftly/UIBezierPath-Superpowers
==== Drawing on Demand
There are four ways of drawing on demand, and I'll start with those. First, I'll implement a UIView subclass's `draw(_:)`, using UIKit to draw into the current context, which Cocoa has already prepared for me:(((drawing, view)))(((views, drawing)))
----
override func draw(_ rect: CGRect) {
    let p = UIBezierPath(ovalIn: CGRect(0,0,100,100))
    UIColor.blue.setFill()
    p.fill()
}
----
Now I'll do the same thing with Core Graphics; this will require that I first get a reference to the current context:
----
override func draw(_ rect: CGRect) {
    let con = UIGraphicsGetCurrentContext()!
    con.addEllipse(in:CGRect(0,0,100,100))
    con.setFillColor(UIColor.blue.cgColor)
    con.fillPath()
}
----
Next, I'll implement a CALayer delegate's `draw(_:in:)`. In this case, we're handed a reference to a context, but it isn't the current context. So I have to make it the current context in order to use UIKit (and I must remember to stop making it the current context when I'm done drawing):
----
override func draw(_ layer: CALayer, in con: CGContext) {
    UIGraphicsPushContext(con)
    let p = UIBezierPath(ovalIn: CGRect(0,0,100,100))
    UIColor.blue.setFill()
    p.fill()
    UIGraphicsPopContext()
}
----
To use Core Graphics in a CALayer delegate's `draw(_:in:)`, I simply keep referring to the context I was handed:
----
override func draw(_ layer: CALayer, in con: CGContext) {
    con.addEllipse(in:CGRect(0,0,100,100))
    con.setFillColor(UIColor.blue.cgColor)
    con.fillPath()
}
----
==== Drawing a UIImage
Now I'll make a UIImage of a blue circle. We can do this at any time (we don't need to wait for some particular method to be called) and in any class (we don't need to be in a UIView subclass).
To construct a UIImage in code, use a UIGraphicsImageRenderer. The basic technique is to create the renderer and call its `image` method to obtain the UIImage, handing it a function containing your drawing instructions.
////
The old way of doing this, in iOS 9 and before, was as follows:
1. You call +UIGraphicsBeginImageContextWithOptions+. It creates an image context and makes it the current context.
2. You draw, thus generating the image.
3. You call +UIGraphicsGetImageFromCurrentImageContext+ to extract an actual UIImage from the image context.
4. You call +UIGraphicsEndImageContext+ to dismiss the context.
The desired image is the result of step 3, and now you can display it in your interface, draw it into some other graphics context, save it as a file, or whatever you like.
Starting in iOS 10, +UIGraphicsBeginImageContextWithOptions+ is superseded by pass:[UIGraphicsImageRenderer] (though you can still use the old way if you want to). The reason for this change is that the old way assumed you wanted an sRGB image with 8-bit color pixels, whereas the introduction of the iPad Pro 9.7-inch and iPhone 7 makes that assumption wrong: they can display ``wide color,'' meaning that you pass:[probably] want a P3 image with 16-bit color pixels. UIGraphicsImageRenderer knows how to make such an image, and will do so by default if we're running on a ``wide color'' device.
Another nice thing about UIGraphicsImageRenderer is that its `image` method takes a function containing your drawing commands and returns the image. Thus there is no need for the step-by-step imperative style of programming required by +UIGraphicsBeginImageContextWithOptions+, where after drawing you had to remember to fetch the image and dismiss the context yourself. Moreover, UIGraphicsImageRenderer doesn't have to be torn down after use; if you know that you're going to be drawing multiple images with the same size and format, you can keep a reference to the renderer and call its `image` method again.
// In this edition of the book, therefore, I will adopt UIGraphicsImageRenderer throughout. If you need to know the details of +UIGraphicsBeginImageContextWithOptions+, consult an earlier edition.
// (If you need a backward compatible way to draw an image ��� you want to use +UIGraphicsBeginImageContextWithOptions+ on iOS 9 and before, but UIGraphicsImageRenderer on iOS 10 and later ��� see xref:appb[], which provides a utility function for that purpose.)
////
In this example, I draw my image using UIKit:
----
let r = UIGraphicsImageRenderer(size:CGSize(100,100))
let im = r.image { _ in
    let p = UIBezierPath(ovalIn: CGRect(0,0,100,100))
    UIColor.blue.setFill()
    p.fill()
}
// im is the blue circle image, do something with it here ...
----
And here's the same thing using Core Graphics:
----
let r = UIGraphicsImageRenderer(size:CGSize(100,100))
let im = r.image { _ in
    let con = UIGraphicsGetCurrentContext()!
    con.addEllipse(in:CGRect(0,0,100,100))
    con.setFillColor(UIColor.blue.cgColor)
    con.fillPath()
}
// im is the blue circle image, do something with it here ...
----
// (Instead of calling `image`, you can call UIGraphicsImageRenderer methods that generate JPEG or PNG image data, suitable for saving as an image file.)
In those examples, we're calling UIGraphicsImageRenderer's `init(size:)` and accepting its default configuration, which is usually what's wanted. To configure the image context further, call the UIGraphicsImageRendererFormat class method `default`, configure the format through its properties, and pass it to UIGraphicsImageRenderer's `init(size:format:)`; there's a sketch after this list. Those properties are:
`opaque`:: By default, `false`; the image context is transparent. If `true`, the image context is opaque and has a black background, and the resulting image has no pass:[transparency].
`scale`:: By default, the same as the scale of the main screen, `UIScreen.main.scale`. This means that the resolution of the resulting image will be correct for the device we're running on.
`preferredRange`:: The color gamut. Your choices are (pass:[UIGraphicsImageRendererFormat.Range]):
* `.standard`
* `.extended`
* `.automatic` (same as `.extended` if we're running on a device that supports ``wide color'')
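Here's a sketch of the whole configuration procedure (the `opaque` and `preferredRange` values are arbitrary choices):
----
let fmt = UIGraphicsImageRendererFormat.default()
fmt.opaque = true // opaque context with a black background
fmt.preferredRange = .standard // sRGB even on a wide color device
let r = UIGraphicsImageRenderer(
    size: CGSize(width:100, height:100), format: fmt)
----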
// TIP: Starting in iOS 11, you can call a UIGraphicsImageRendererFormat initializer, `init(for:)`, which takes a UITraitCollection; typically, this will be `self.traitCollection`, and the `scale` and `prefersExtendedRange` properties of the renderer will be set from the current environment.
// but I see no advantage to this, so I have not changed my code; perhaps the point is only that people were not discovering `default()` ��� I certainly had trouble with it at first
A single parameter (ignored in the preceding examples) arrives into the UIGraphicsImageRenderer's `image` function. It's a UIGraphicsImageRendererContext. This provides access to the configuring pass:[UIGraphicsImageRendererFormat] (its `format`). It also lets you obtain the graphics context (its `cgContext`); you can alternatively get this by calling `UIGraphicsGetCurrentContext`, and the preceding code does so, for consistency with the other ways of drawing. In addition, the UIGraphicsImageRendererContext can hand you a copy of the image as drawn up to this point (its `currentImage`); also, it implements a few basic drawing commands of its own.
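Here, by way of illustration, is a variant of the previous example that actually uses the parameter (a sketch):
----
let r = UIGraphicsImageRenderer(size:CGSize(100,100))
let im = r.image { ctx in
    _ = ctx.format.scale // the configuring format
    UIColor.blue.setFill()
    ctx.fill(CGRect(0,0,100,100)) // one of the context's own drawing commands
    _ = ctx.currentImage // the image as drawn up to this point
    _ = ctx.cgContext // the underlying graphics context
}
----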
[[SECimageDrawing]]
=== UIImage Drawing
A UIImage provides methods for drawing itself into the current context. We already know how to obtain a UIImage, and we already know how to obtain a graphics context and make it the current context, so we are ready to experiment with these methods.
Here, I'll make a UIImage consisting of two pictures of Mars side by side (xref:FIGdoubleMars[]):(((images, drawing)))(((drawing, image)))
----
let mars = UIImage(named:"Mars")!
let sz = mars.size
let r = UIGraphicsImageRenderer(size:CGSize(sz.width*2, sz.height),
    format:mars.imageRendererFormat)
let im = r.image { _ in
    mars.draw(at:CGPoint(0,0))
    mars.draw(at:CGPoint(sz.width,0))
}
----
[[FIGdoubleMars]]
.Two images of Mars combined side by side
image::figs/pios_1506.png[]
Observe that image scaling works perfectly in that example. If we have multiple resolution variants of our original Mars image, the correct one for the current device is used, and is assigned the correct +scale+ value. The image context that we are drawing into also has the correct +scale+ by default. And the resulting image `im` has the correct +scale+ as well. Our code produces an image that looks correct on the current device, whatever its screen resolution may be.
TIP: If your purpose in creating an image graphics context is to draw an existing UIImage into it, you can gain some efficiency by initializing the image renderer's format to the image's `imageRendererFormat`.
Additional UIImage methods let you scale an image into a desired rectangle as you draw (effectively resizing the image), and specify the compositing (blend) mode whereby the image should combine with whatever is already present. To illustrate, I'll create an image showing Mars centered in another image of Mars that's twice as large, using the `.multiply` blend mode (xref:FIGdoubleMars2[]):(((images, resizing)))(((resizing an image)))(((scaling an image)))
----
let mars = UIImage(named:"Mars")!
let sz = mars.size
let r = UIGraphicsImageRenderer(size:CGSize(sz.width*2, sz.height*2),
format:mars.imageRendererFormat)
let im = r.image { _ in
mars.draw(in:CGRect(0,0,sz.width*2,sz.height*2))
mars.draw(in:CGRect(sz.width/2.0, sz.height/2.0, sz.width, sz.height),
blendMode: .multiply, alpha: 1.0)
}
----
[[FIGdoubleMars2]]
.Two images of Mars in different sizes, composited
image::figs/pios_1507.png[]
Redrawing an image at a smaller size is of particular importance in iOS programming, because it is a waste of valuable memory to hand a UIImageView a large image and ask the image view to display it smaller. Some frameworks such as Image I/O (xref:chap_id36[]) and PhotoKit (xref:chap_id30[]) allow you to load a downsized image thumbnail directly, but sometimes you'll need to downscale an image to fit within a given size yourself. For a general utility method that downsizes a UIImage to fit within a given CGSize, see xref:appb[].
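By way of illustration, here's one possible shape for such a utility. This is a sketch only (the `downsized` name is hypothetical, and the version in xref:appb[] may differ):
----
func downsized(_ im: UIImage, toFitWithin sz: CGSize) -> UIImage {
    // hypothetical utility: scale down proportionally to fit within sz
    let ratio = min(sz.width / im.size.width, sz.height / im.size.height)
    guard ratio < 1 else { return im } // already fits; don't scale up
    let newSize = CGSize(
        width: im.size.width * ratio, height: im.size.height * ratio)
    let r = UIGraphicsImageRenderer(size: newSize, format: im.imageRendererFormat)
    return r.image { _ in
        im.draw(in: CGRect(origin: .zero, size: newSize))
    }
}
----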
Sometimes, you may want to extract a smaller region of the original image -- effectively cropping the image as you draw it. Unfortunately, there is no UIImage drawing method for specifying the source rectangle. You can work around this by creating a smaller graphics context and positioning the image drawing so that the desired region falls into it. There is no harm in doing this, and it's a perfectly standard strategy; what falls outside the graphics context simply isn't drawn.
To obtain an image of the right half of Mars, you can make a graphics context half the width of the +mars+ image, and then draw +mars+ shifted left, so that only its right half intersects the graphics context (xref:FIGhalfMars[]):(((images, cropping)))(((cropping an image)))
----
let mars = UIImage(named:"Mars")!
let sz = mars.size
let r = UIGraphicsImageRenderer(size:CGSize(sz.width/2.0, sz.height),
format:mars.imageRendererFormat)
let im = r.image { _ in
mars.draw(at:CGPoint(-sz.width/2.0,0))
}
----
[[FIGhalfMars]]
.Half the original image of Mars
image::figs/pios_1508.png[]
A nice feature of UIGraphicsImageRenderer is that we can initialize it with a bounds instead of a size. Instead of drawing +mars+ shifted left, we can achieve the same effect by drawing +mars+ at `.zero` into a bounds that is shifted right:
----
let mars = UIImage(named:"Mars")!
let sz = mars.size
let r = UIGraphicsImageRenderer(
bounds:CGRect(sz.width/2.0, 0, sz.width/2.0, sz.height),
format:mars.imageRendererFormat)
let im = r.image { _ in
mars.draw(at:.zero)
}
----
Vector images work like normal images. A PDF vector image in the asset catalog for which you have checked Preserve Vector Data will scale sharply when you call `draw(in:)`, and a symbol image always scales sharply:
----
let symbol = UIImage(systemName:"rhombus")!
let sz = CGSize(100,100)
let r = UIGraphicsImageRenderer(size:sz)
let im = r.image {_ in
symbol.withTintColor(.purple).draw(in:CGRect(origin:.zero, size:sz))
}
----
The resulting rhombus is purple (because we gave the image a tint color before drawing it) and smoothly drawn at 100×100 (because it's a vector image). But of course, once you've drawn the vector image into a UIImage (like our `im`), _that_ image is _not_ a vector image, so it doesn't scale sharply.
It is better, however, not to do what I just did. You really should try not to call `draw(in:)` on a symbol image. Instead, generate a UIImage with a custom symbol configuration, specifying a point size, and call `draw(at:)`, letting the symbol image size itself according to the point size you provided.
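Here's a sketch of that approach, using the same `rhombus` symbol:
----
let config = UIImage.SymbolConfiguration(pointSize: 100)
let symbol = UIImage(systemName: "rhombus", withConfiguration: config)!
    .withTintColor(.purple)
let r = UIGraphicsImageRenderer(size: symbol.size)
let im = r.image { _ in
    symbol.draw(at: .zero) // the symbol sizes itself from its configuration
}
----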
=== CGImage Drawing
The Core Graphics analog to UIImage is CGImage.(((CGImage))) In essence, a UIImage is (usually) a wrapper for a CGImage: the UIImage is bitmap image data plus scale, orientation, and other information, whereas the CGImage is the bare bitmap image data alone. The two are easily converted to one another: a UIImage has a +cgImage+ property that accesses its Quartz image data, and you can make a UIImage from a CGImage using +init(cgImage:)+ or +init(cgImage:scale:orientation:)+.
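For instance, the round trip between the two classes looks like this (assuming our Mars image again):
----
let ui = UIImage(named: "Mars")!
let cg = ui.cgImage! // the bare bitmap data
let ui2 = UIImage(cgImage: cg,
    scale: ui.scale, orientation: ui.imageOrientation) // wrapped again
----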
A CGImage lets you create a new image cropped from a rectangular region of the original image, which you can't do with UIImage. (A CGImage has other powers a UIImage doesn't have; for instance, you can apply an image mask to a CGImage.) I'll demonstrate by splitting the image of Mars in half and drawing the two halves pass:[separately] (xref:FIGmarsSplit[]):
----
let mars = UIImage(named:"Mars")!
// extract each half as CGImage
let marsCG = mars.cgImage!
let sz = mars.size
let marsLeft = marsCG.cropping(to:
CGRect(0,0,sz.width/2.0,sz.height))!
let marsRight = marsCG.cropping(to:
CGRect(sz.width/2.0,0,sz.width/2.0,sz.height))!
let r = UIGraphicsImageRenderer(size: CGSize(sz.width*1.5, sz.height),
format:mars.imageRendererFormat)
let im = r.image { ctx in
let con = ctx.cgContext
con.draw(marsLeft, in:
CGRect(0,0,sz.width/2.0,sz.height))
con.draw(marsRight, in:
CGRect(sz.width,0,sz.width/2.0,sz.height))
}
----
[[FIGmarsSplit]]
.Image of Mars split in half (badly)
image::figs/pios_1509.png[]
////
Something I don't discuss here is the question of efficiency. According to David Duncan, CGImageCreateWithImageInRect used to trade memory for speed in iOS 3 and before by possibly caching the decoded image, so using it twice in succession as I do here was fast (but perhaps memory-intensive). In iOS 4, every use of CGImageCreateWithImageInRect may require a fresh copy of the full decoded image (probably without caching it). Thus my approach illustrated above is inefficient. I suspect that a more efficient approach might be to create the CGImage only once, then set a clipping rectangle path and draw the entire image appropriately offset. But even that might not be true; perhaps the CGImage is created in memory freshly every time you draw with it. Ultimately I know nothing of this topic and Apple has no official information whatever about it, so I don't treat it in the book.
////
Well, _that_ was a train wreck! In the first place, the drawing is upside-down. It isn't rotated; it's mirrored top to bottom, or, to use the technical term, _flipped_. This pass:[phenomenon] can arise when you create a CGImage and then draw it, and is due to a mismatch in the native coordinate systems of the source and target contexts.(((flipping)))
////
There are various ways of compensating for this mismatch between the coordinate systems. One is to draw the CGImage into an intermediate UIImage and extract _another_ CGImage from that. xref:EXflip[] presents a utility function for doing this.
[[EXflip]]
.Utility for flipping an image drawing
====
----
func flip (_ im: CGImage) -> CGImage {
let sz = CGSize(CGFloat(im.width), CGFloat(im.height))
let r = UIGraphicsImageRenderer(size:sz)
return r.image { ctx in
ctx.cgContext.draw(im, in: CGRect(0, 0, sz.width, sz.height))
}.cgImage!
}
----
====
Armed with the utility function from xref:EXflip[], we can fix our CGImage drawing calls in the previous example so that they draw the halves of Mars the right way up:
----
con.draw(flip(marsLeft!), in:
CGRect(0,0,sz.width/2.0,sz.height))
con.draw(flip(marsRight!), in:
CGRect(sz.width,0,sz.width/2.0,sz.height))
----
////
In the second place, we didn't split the image of Mars in half; we seem to have split it into quarters instead. The reason is that we're using a high-resolution device, and there is a high-resolution variant of our image file.(((screen, high-resolution)))(((resolution)))(((high-resolution, screen))) When we call UIImage's +init(named:)+, we get a UIImage that compensates for the increased size of a high-resolution image by setting its own +scale+ property to match. But a CGImage doesn't have a +scale+ property, and knows nothing of the fact that the image dimensions are increased! Therefore, on a high-resolution device, the CGImage that we extract from our Mars UIImage as `mars.cgImage` is larger (in each dimension) than `mars.size`, and all our calculations after that are wrong.
// When you call a UIImage's +CGImage+ method, therefore, you can't assume that the resulting CGImage is the same size as the original UIImage; a UIImage's +size+ property is the same for a single-resolution image and its double-resolution counterpart, because the +scale+ tells it how to compensate, but the CGImage of a double-resolution UIImage is twice as large in both dimensions as the CGImage of the corresponding single-resolution image.
// skip intermediate example, since I think the last one is the way to go in any case
////
So, in extracting a desired piece of the CGImage, we must either multiply all appropriate values by the scale or express ourselves in terms of the CGImage's dimensions. Here's a version of our original code that draws correctly on a device of any resolution, and compensates for flipping:
----
let mars = UIImage(named:"Mars")!
let sz = mars.size
let marsCG = mars.CGImage
let szCG = CGSizeMake(CGFloat(CGImageGetWidth(marsCG)), CGFloat(CGImageGetHeight(marsCG)))
let marsLeft =
CGImageCreateWithImageInRect(
marsCG, CGRect(0,0,szCG.width/2.0,szCG.height))
let marsRight =
CGImageCreateWithImageInRect(
marsCG, CGRect(szCG.width/2.0,0,szCG.width/2.0,szCG.height))
UIGraphicsBeginImageContextWithOptions(
CGSizeMake(sz.width*1.5, sz.height), false, 0)
// the rest as before, draw each CGImage flipped
let con = UIGraphicsGetCurrentContext()!
CGContextDrawImage(con,
CGRect(0,0,sz.width/2.0,sz.height), flip(marsLeft!))
CGContextDrawImage(con,
CGRect(sz.width,0,sz.width/2.0,sz.height), flip(marsRight!))
let im = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
----
////
The simplest solution, when you drop down to the CGImage world to perform some transmutation, is to wrap the resulting CGImage in a UIImage and draw the UIImage _instead_ of the CGImage. The UIImage can be formed in such a way as to compensate for scale -- call +init(cgImage:scale:orientation:)+ -- and by drawing a UIImage instead of a CGImage, we avoid the flipping problem:
----
let mars = UIImage(named:"Mars")!
let sz = mars.size
let marsCG = mars.cgImage!
let szCG = CGSize(CGFloat(marsCG.width), CGFloat(marsCG.height))
let marsLeft =
marsCG.cropping(to:
CGRect(0,0,szCG.width/2.0,szCG.height))
let marsRight =
marsCG.cropping(to:
CGRect(szCG.width/2.0,0,szCG.width/2.0,szCG.height))
let r = UIGraphicsImageRenderer(size:CGSize(sz.width*1.5, sz.height),
format:mars.imageRendererFormat)
let im = r.image { _ in
UIImage(cgImage: marsLeft!,
scale: mars.scale,
orientation: mars.imageOrientation).draw(at:CGPoint(0,0))
UIImage(cgImage: marsRight!,
scale: mars.scale,
orientation: mars.imageOrientation).draw(at:CGPoint(sz.width,0))
}
----
// TIP: Yet another solution to flipping is to apply a transform to the graphics context before drawing the CGImage, effectively flipping the context's internal coordinate system. This is elegant, but can be confusing if there are other transforms in play. I'll talk more about graphics context transforms later in this chapter.
////
.Why Flipping Happens
****
The ultimate source of accidental flipping is that Core Graphics comes from the macOS world, where the coordinate system's origin is located by default at the bottom left and the positive y-direction is upward, whereas on iOS the origin is located by default at the top left and the positive y-direction is downward. In most drawing situations, no problem arises, because the coordinate system of the graphics context is adjusted to compensate. Thus, the default coordinate system for drawing in a Core Graphics context on iOS has the origin at the top left, just as you expect. But creating and drawing a CGImage exposes the ``impedance mismatch'' between the two worlds.
****
////
=== Snapshots
An entire view -- anything from a single button to your whole interface, complete with its contained hierarchy of views -- can be drawn into the current graphics context by calling the UIView instance method `drawHierarchy(in:afterScreenUpdates:)`. The result is a _snapshot_ of the original view: it looks like the original view, but it's basically just a bitmap image of it, a lightweight visual duplicate.
TIP: `drawHierarchy(in:afterScreenUpdates:)` is much faster than the CALayer method `render(in:)`; nevertheless, the latter does still come in handy, as I'll show in xref:chap_id18[].
An even faster way to obtain a snapshot of a view is to use the UIView (or UIScreen) instance method +snapshotView(afterScreenUpdates:)+. The result is a UIView, not a UIImage; it's rather like a UIImageView that knows how to draw only one image, namely the snapshot. Such a snapshot view will typically be used as is, but you can enlarge its bounds and the snapshot image will stretch. If you want the stretched snapshot to pass:[behave like] a resizable image, call +resizableSnapshotView(from:afterScreenUpdates:withCapInsets:)+ instead. It is perfectly reasonable to make a snapshot view from a snapshot view.
Snapshots are useful because of the dynamic nature of the iOS interface. You might place a snapshot of a view in your interface in front of the real view to hide what's happening, or use it during an animation to present the illusion of a view moving when in fact it's just a snapshot.(((views, snapshot)))(((snapshot, view)))
Here's an example from one of my apps. It's a card game, and its `views` portray cards. I want to animate the removal of all those cards from the board, flying away to an offscreen point. But I don't want to animate the views themselves! They need to stay put, to portray future cards. So I make a snapshot view of each of the card views; I then make the card views invisible, put the snapshot views in their place, and animate the snapshot views. This code will mean more to you after you've read xref:chap_id17[], but the strategy is evident:
----
for v in views {
let snapshot = v.snapshotView(afterScreenUpdates: false)!
let snap = MySnapBehavior(item:snapshot, snapto:CGPoint(
x: self.anim.referenceView!.bounds.midX,
y: -self.anim.referenceView!.bounds.height)
)
self.snaps.append(snapshot) // keep a list so we can remove them later
snapshot.frame = v.frame
v.isHidden = true
self.anim.referenceView!.addSubview(snapshot)
self.anim.addBehavior(snap)
}
----
// There is another use of `snapshotViewAfterScreenUpdates:` that I make no mention of (because I didn't know about it): you can call it with `true` as your app goes into the background to force a new _system_ snapshot to be used in the app switcher and as the launch image next time around.
// but I tried this and got very weird results - the app was frozen when it returned. So perhaps better not to mention it.
[[SECVignette]]
=== CIFilter and CIImage
The ``CI'' in ((CIFilter)) and ((CIImage)) stands for Core Image, a technology for transforming images through mathematical filters. Core Image started life on the desktop (macOS), and when it was originally migrated into iOS 5, some of the filters available on the desktop were not available in iOS, presumably because they were then too intensive mathematically for a mobile device. Over the years, more and more macOS filters were added to the iOS repertoire, and now the two have complete parity: _all_ macOS filters are available in iOS, and the two platforms have nearly identical APIs.(((Core Image framework)))
A filter is a CIFilter. There are more than 200 available filters; they fall naturally into several broad pass:[categories:]
Patterns and gradients:: These filters create CIImages that can then be combined with other CIImages, such as a single color, a checkerboard, stripes, or a gradient.
Compositing:: These filters combine one image with another, using compositing blend modes familiar from image processing programs.
Color:: These filters adjust or otherwise modify the colors of an image. You can alter an image's saturation, hue, brightness, contrast, gamma and white point, exposure, shadows and highlights, and so on.
Geometric:: These filters perform basic geometric transformations on an image, such as scaling, rotation, and cropping.
Transformation:: These filters distort, blur, or stylize an image.
Transition:: These filters provide a frame of a transition between one image and another; by asking for frames in sequence, you can animate the transition (I'll demonstrate in xref:chap_id17[]).
Special purpose:: These filters perform highly specialized operations such as face detection and generation of barcodes.
A CIFilter is a set of instructions for generating a CIImage -- the filter's _output image_. Moreover, most CIFilters operate on a CIImage -- the filter's _input image_. So the output image of one filter can be the input image of another filter. In this way, filters can be _chained_. As you build a chain of filters, nothing actually happens; you're just configuring a sequence of instructions.
If the first CIFilter in the sequence needs an input image, you can get a CIImage from a CGImage with +init(cgImage:)+, or from a UIImage with +init(image:)+. When the last CIFilter in the sequence produces a CIImage, you can transform it into a bitmap drawing -- a CGImage or a UIImage. In this way, you've transformed an image into another image, using CIImages and CIFilters as intermediaries. The final step, when you generate the bitmap drawing, is called _rendering_ the image. When you render the image, the entire calculation described by the chain of filters is actually performed. Rendering the last CIImage in the sequence is the _only_ calculation-intensive move.
WARNING: A common beginner mistake is trying to obtain a CIImage directly from a UIImage through the UIImage's +ciImage+ property. In general, that's not going to work. That property does not transform a UIImage into a CIImage; it is applicable only to a UIImage that _already_ wraps a CIImage, and most UIImages don't (they wrap a CGImage).
The basic use of a CIFilter is quite simple:
1. Obtain a CIFilter object. You can specify a CIFilter by its string name, by calling +init(name:)+; to learn the names, consult Apple's _Core Image Filter Reference_ in the documentation archive, or call the CIFilter class method +filterNames(inCategories:)+ with a `nil` argument. New in iOS 13, you can obtain a CIFilter object by calling a CIFilter convenience class method named after the string name:
+
----
let filter = CIFilter(name: "CICheckerboardGenerator")!
// or, new in iOS 13:
let filter = CIFilter.checkerboardGenerator()
----
2. A filter has keys and values that determine its behavior. These are its _parameters_. You set them as desired. You can learn about a filter's parameters entirely in code, but typically you'll consult the documentation. To set a parameter, call `setValue(_:forKey:)`. New in iOS 13, you can set a convenience property of the CIFilter:
+
----
filter.setValue(30, forKey: "inputWidth")
// or, new in iOS 13:
filter.width = 30
----
There are several variations on those steps:
* Instead of calling `setValue(_:forKey:)` repeatedly, you can call `setValuesForKeys(_:)` with a dictionary to set multiple parameters at once.
* Instead of obtaining the filter and then setting parameters, you can do both in a single move by calling `init(name:parameters:)`.
* If a CIFilter requires an input CIImage, you can call `applyingFilter(_:parameters:)` on the CIImage to obtain the filter, set its parameters, and receive the output image, in a single move.
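To illustrate the last of those variations, here's a sketch applying a standard sepia filter to a CIImage in a single move:
----
let moici = CIImage(image: UIImage(named: "Moi")!)!
let sepia = moici.applyingFilter("CISepiaTone",
    parameters: ["inputIntensity": 0.8])
----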
Now let's talk about how to render a CIImage. This, as I've said, is the only calculation-intensive move; it can be slow and expensive. There are three main ways:
With a CIContext:: Create a CIContext by calling +init()+ or +init(options:)+; this itself is expensive, so try to make just one CIContext and retain and reuse it. Then call the CIContext's `createCGImage(_:from:)`. The first parameter is the CIImage. The second parameter is a CGRect specifying the region of the CIImage to be rendered. A CIImage does not have a frame or bounds; its CGRect is its `extent`. The output is a CGImage.
With a UIImage:: Create a UIImage wrapping the CIImage by calling +init(ciImage:)+ or +init(ciImage:scale:orientation:)+. You then _draw_ the UIImage into some graphics context; that is what causes the image to be rendered.
With a UIImageView:: This is a shortcut for the preceding approach. Create a UIImage wrapping the CIImage and use it to set a UIImageView's `image`. The display of the image view causes the image to be rendered. In general, this approach works only on a device, though it might work in the simulator in Xcode 11.
// there's some new stuff about rendering in the background in iOS 11, but I don't know whether I need to document it?
TIP: There are other ways of rendering a CIImage that have the advantage of being very fast and suitable for animated or rapid pass:[rendering.] In particular, you could use Metal. But that's outside the scope of this book.
We're ready for an example! I'll start with an ordinary photo of myself (it's true I'm wearing a motorcycle helmet, but it's still ordinary) and create a circular vignette effect (xref:FIGvignette[]).
I'll take advantage of the new iOS 13 convenience methods and properties; to bring these to life, we must `import CoreImage.CIFilterBuiltins`:
[[FIGvignette]]
.A photo of me, vignetted
image::figs/pios_1510.png[]
----
let moi = UIImage(named:"Moi")!
let moici = CIImage(image:moi)! <1>
let moiextent = moici.extent
let smaller = min(moiextent.width, moiextent.height)
let larger = max(moiextent.width, moiextent.height)
// first filter
let grad = CIFilter.radialGradient() <2>
grad.center = moiextent.center
grad.radius0 = Float(smaller)/2.0 * 0.7
grad.radius1 = Float(larger)/2.0
let gradimage = grad.outputImage!
// second filter
let blend = CIFilter.blendWithMask() <3>
blend.inputImage = moici
blend.maskImage = gradimage
let blendimage = blend.outputImage!
----
<1> From the image of me (`moi`), we derive a CIImage (`moici`).
<2> We use a CIFilter (`grad`) to form a radial gradient between the default colors of white and black.
<3> We use a second CIFilter (`blend`) to treat the radial gradient as a mask for blending between the photo of me and a default clear background: where the radial gradient is white (everything inside the gradient's inner radius) we see just me, and where the radial gradient is black (everything outside the gradient's outer radius) we see just the clear color, with a gradation in between, so that the image fades away in the circular band between the gradient's radii.
We have obtained the final CIImage in the chain (`blendimage`), and the processor has not yet performed any rendering. Now we want to generate the final bitmap and display it. Let's say we're going to display it as the +image+ of a UIImageView `self.iv`. I'll demonstrate two of the ways of doing that.
First, the CIContext approach. `self.context` is a property initialized to a CIContext. The starred line is the actual rendering:
----
let moicg = self.context.createCGImage(blendimage, from: moiextent)! // *
self.iv.image = UIImage(cgImage: moicg)
----
Second, the UIImage drawing approach; the starred line is the actual rendering:
----
let r = UIGraphicsImageRenderer(size:moiextent.size)
self.iv.image = r.image { _ in
UIImage(ciImage: blendimage).draw(in:moiextent) // *
}
----
A filter chain can be encapsulated into a single custom filter by subclassing CIFilter. Your subclass just needs to override the `outputImage` property (and possibly other methods such as `setDefaults`), with additional properties to make it key-value pass:[coding] compliant for any input keys. Here's our vignette filter as a simple CIFilter subclass with two input keys; `inputImage` is the image to be vignetted, and `inputPercentage` is a percentage (between 0 and 1) adjusting the gradient's inner radius:(((subclassing, CIFilter)))
----
class MyVignetteFilter : CIFilter {
@objc var inputImage : CIImage?
@objc var inputPercentage : NSNumber? = 1.0
override var outputImage : CIImage? {
return self.makeOutputImage()
}
private func makeOutputImage () -> CIImage? {
guard let inputImage = self.inputImage else {return nil}
guard let inputPercentage = self.inputPercentage else {return nil}
let extent = inputImage.extent
let smaller = min(extent.width, extent.height)
let larger = max(extent.width, extent.height)
let grad = CIFilter.radialGradient()
grad.center = extent.center
grad.radius0 = Float(smaller)/2.0 * inputPercentage.floatValue
grad.radius1 = Float(larger)/2.0
let gradimage = grad.outputImage!
let blend = CIFilter.blendWithMask()
blend.inputImage = self.inputImage
blend.maskImage = gradimage
return blend.outputImage
}
}
----
And here's how to use our CIFilter subclass and display its output in a UIImageView:
----
let vig = MyVignetteFilter()
let moici = CIImage(image: UIImage(named:"Moi")!)!
vig.setValuesForKeys([
"inputImage":moici,
"inputPercentage":0.7
])
let outim = vig.outputImage!
let outimcg = self.context.createCGImage(outim, from: outim.extent)!
self.iv.image = UIImage(cgImage: outimcg)
----
// TIP: You can also create your own CIFilter from scratch -- not by combining existing filters, but by coding the actual mathematics of the filter. The details are outside the scope of this book; you'll want to look at the ((CIKernel)) class.
// A great place to experiment with CIFilter is in Apple's **** choose Xcode -> Open Developer Tool, and if **** isn't listed (which, by default, it isn't), choose More Developer Tools and download and install the Graphics Tools for Xcode.
// no it isn't, it sucks
CIImage is a powerful class in its own right, with many valuable convenience methods. You can apply a transform to a CIImage, crop it, and even apply a Gaussian blur directly to it. Also, CIImage understands EXIF orientations and can use them to pass:[reorient] itself.
=== Blur and Vibrancy Views
Certain views on iOS, such as navigation bars and the control center, are translucent and display a blurred rendition of what's behind them. You can create similar effects using the UIVisualEffectView class.(((blurred views)))(((views, blurred)))(((vibrancy views)))(((views, vibrancy)))(((UIVisualEffectView)))
A UIVisualEffectView is initialized by calling `init(effect:)`; the parameter is a UIVisualEffect. UIVisualEffect is an abstract superclass; the concrete subclasses are UIBlurEffect and UIVibrancyEffect. You'll use a visual effect view with a blur effect to blur what's behind it; then if you like you can add a visual effect view with a vibrancy effect, along with subviews. The vibrancy effect view goes inside the blur effect view's `contentView`. Any subviews of the vibrancy effect view go inside its `contentView`, and they will be treated as templates: all that matters is their opacity or transparency, as their color is replaced. Never give a UIVisualEffectView a direct subview!
UIBlurEffect is initialized by calling `init(style:)`. New in iOS 13, the styles are adaptive to light and dark user interface style, and are called _materials._ There are five of them (plus each material has two nonadaptive variants with `Light` or `Dark` appended to the name):(((materials)))(((system materials)))((("mode, light or dark", "blur")))
* `.systemUltraThinMaterial`
* `.systemThinMaterial`
* `.systemMaterial`
* `.systemThickMaterial`
* `.systemChromeMaterial`
UIVibrancyEffect is initialized by calling `init(blurEffect:style:)` (new in iOS 13). The first parameter will be the blur effect of the underlying UIVisualEffectView. The `style:` will be one of these:
* `.label`
* `.secondaryLabel`
* `.tertiaryLabel`
* `.quaternaryLabel`
* `.fill`
* `.secondaryFill`
* `.tertiaryFill`
* `.separator`
Here's an example of a blur effect view covering and blurring the interface (`self.view`), and containing a UILabel wrapped in a vibrancy effect view:
----
let blurEffect = UIBlurEffect(style: .systemThinMaterial)
let blurView = UIVisualEffectView(effect: blurEffect)
blurView.frame = self.view.bounds
blurView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
self.view.addSubview(blurView)
let vibEffect = UIVibrancyEffect(
blurEffect: blurEffect, style: .label)
let vibView = UIVisualEffectView(effect:vibEffect)
let lab = UILabel()
lab.text = "Hello, world!"
lab.sizeToFit()
vibView.bounds = lab.bounds
vibView.center = self.view.bounds.center
vibView.autoresizingMask =
[.flexibleTopMargin, .flexibleBottomMargin,
.flexibleLeftMargin, .flexibleRightMargin]
blurView.contentView.addSubview(vibView)
vibView.contentView.addSubview(lab)
----
xref:FIGblurAndVibrancy[] shows the result in light and dark mode.
[[FIGblurAndVibrancy]]
.A blurred background and a vibrant label
image::figs/pios_1511b.png[]
Both a blur effect view and a blur effect view with an embedded vibrancy effect view are available as Library objects in the nib editor.
=== Drawing a UIView
Most of the examples of drawing so far in this chapter have produced UIImage objects. But, as I've already explained, a UIView itself provides a graphics context; whatever you draw into that graphics context will appear directly in that view.(((drawing, view)))(((views, drawing))) The technique here is to subclass UIView and implement the subclass's `draw(_:)` method. The result is that, from time to time, or whenever you send it the `setNeedsDisplay` message, your view's `draw(_:)` will be called. This is your subclass and your code, so you get to say how this view draws itself at that moment. Whatever drawing you do in `draw(_:)`, that's what the interface will display.
// Let's say we have a UIView subclass called MyView. You would then instantiate this class and get the instance into the view hierarchy. One way to do this would be to drag a UIView from the Library into a view in the nib editor and set its class to MyView in the Identity inspector; another would be to run code that instantiates MyView and puts the instance into the interface.
When you override `draw(_:)`, there will usually be no need to call +super+, since UIView's own implementation of `draw(_:)` does nothing. At the time that `draw(_:)` is called, the current graphics context has already been set to the view's own graphics context. You can use Core Graphics functions or UIKit convenience methods to draw into that context. I gave some basic examples earlier in this chapter (xref:SECgraphicscontexts[]).
The need to draw in real time, on demand, surprises some beginners, who worry that drawing may be a time-consuming operation. This can indeed be a reasonable consideration, and where the same drawing will be used in many places in your interface, it may make sense to construct a UIImage instead, once, and then reuse that UIImage by drawing it in a view's `draw(_:)`.
In general, though, you should not optimize prematurely. The code for a drawing operation may appear verbose and yet be extremely fast. Moreover, the iOS drawing system is efficient; it doesn't call `draw(_:)` unless it has to (or is told to, through a call to +setNeedsDisplay+), and once a view has drawn itself, the result is cached so that the cached drawing can be reused instead of repeating the drawing operation from scratch. (Apple refers to this cached drawing as the view's _bitmap backing store_.) You can readily satisfy yourself of this fact with some caveman debugging, logging in your `draw(_:)` implementation; you may be amazed to discover that your custom UIView's `draw(_:)` code is called only once in the entire lifetime of the app!
In fact, moving code to `draw(_:)` is commonly a way to _increase_ efficiency. This is because it is more efficient for the drawing engine to render directly onto the screen than for it to render offscreen and then copy those pixels onto the screen.
Here are three important caveats with regard to UIView's `draw(_:)` method:
* Don't call `draw(_:)` yourself. If a view needs updating and you want its `draw(_:)` called, send the view the +setNeedsDisplay+ message. This will cause `draw(_:)` to be called at the next proper moment.
* Don't override `draw(_:)` unless you are assured that this is legal. It is not legal to override `draw(_:)` in a subclass of UIImageView, for instance; you cannot combine your drawing with that of the UIImageView.
* Don't do anything in `draw(_:)` except draw. Performing other kinds of configuration there is a common beginner mistake. Other configurations, such as setting the view's background color, or adding subviews or sublayers, should be performed elsewhere, such as in an initializer override.
Where drawing is extensive and can be compartmentalized into sections, you may be able to gain some additional efficiency by paying attention to the parameter passed into `draw(_:)`. This parameter is a CGRect designating the region of the view's bounds that needs refreshing. Normally, this is the view's entire bounds; but if you call `setNeedsDisplay(_:)`, which takes a CGRect parameter, it will be the CGRect that you passed in as argument. You could respond by drawing only what goes into those bounds; but even if you don't, your drawing will be clipped to those bounds, so, while you may not spend less time drawing, the system will draw more efficiently.
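Here's a sketch, assuming a hypothetical MyView subclass that simply paints the refresh area blue:
----
class MyView : UIView {
    override func draw(_ rect: CGRect) {
        // rect is the region needing refresh; drawing is clipped to it
        let con = UIGraphicsGetCurrentContext()!
        con.setFillColor(UIColor.blue.cgColor)
        con.fill(rect)
    }
}
// elsewhere, to invalidate just part of the view:
// myView.setNeedsDisplay(CGRect(x: 0, y: 0, width: 50, height: 50))
----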
// Can remove this! iOS 12 provides a transparent view graphics context by default even when both those things are the case!
// No, not so fast. I discovered that we still get a black view if we draw in certain ways!!! for example:
////
----
let con = UIGraphicsGetCurrentContext()!
con.setFillColor(UIColor.blue.cgColor)
con.fill(CGRect(0,0,0,0))
// but the context is transparent if we comment out the next line
con.setFillColor(UIColor.red.cgColor)
con.fill(CGRect(0,0,0,0))
----
////
// Okay, I've got a theory. If we draw in one color, maybe that can be expressed by some sort of keyed color. But if we draw in two colors, it can't, and for some reason then we get the automatic opaque graphics context.
When a custom UIView subclass has a `draw(_:)` implementation and you create an instance of this subclass in code, you may be surprised (and annoyed) to find that the view has a black background! This is a source of considerable confusion among beginners. The black background arises particularly when two things are true:(((views, transparency)))(((views, black background)))(((background, black)))(((views, opaque)))
* The view's +backgroundColor+ is `nil`.
* The view's +isOpaque+ is `true`.
When a UIView is created in code with `init(frame:)`, by default both those things _are_ true. If this issue arises for you and you want to get rid of the black background, override +init(frame:)+ and have the view set its own +isOpaque+ to `false`:
----
class MyView : UIView {
override init(frame: CGRect) {
super.init(frame:frame)
self.isOpaque = false
}
required init?(coder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
}
----
With a UIView created in the nib, on the other hand, the black background problem doesn't arise. This is because the UIView's +backgroundColor+ is not `nil`. The nib assigns it _some_ actual background color, even if that color is +UIColor.clear+.
=== Graphics Context Commands
Whenever you draw, you are giving commands to the graphics context into which you are drawing. This is true regardless of whether you use UIKit methods or Core Graphics functions. Learning to draw is really a matter of understanding how a graphics context works. That's what this section is about.(((graphics context, drawing into)))
Under the hood, Core Graphics commands to a graphics context are global C functions with names like `CGContextSetFillColor`; but Swift ``renamification'' recasts them as if a CGContext were a genuine object representing the graphics context, with the Core Graphics functions appearing as methods of the CGContext. Moreover, thanks to Swift overloading, multiple functions are collapsed into a single command; for example, `CGContextSetFillColor` and `CGContextSetFillColorWithColor` and `CGContextSetRGBFillColor` and `CGContextSetGrayFillColor` all become the same command, +setFillColor+.
==== Graphics Context Settings
As you draw in a graphics context, the drawing obeys the context's current settings. For this reason, the procedure is always to configure the context's settings first, and then draw. To draw a red line and then a blue line, you would first set the context's line color to red, and draw the first line; then you'd set the context's line color to blue, and draw the second line. To the eye, it appears that the redness and blueness are properties of the individual lines, but in fact, at the time you draw each line, line color is a feature of the entire graphics context. (((graphics context, state)))(((state, graphics context)))pass:none[]
A graphics context has, at every moment, a _state_, which is the sum total of all its current settings; the way a piece of drawing looks is the result of what the graphics context's state was at the moment that piece of drawing was performed. To help you manipulate entire states, the graphics context provides a _stack_ for holding states. Every time you call +saveGState+, the context pushes the current state onto the stack; every time you call +restoreGState+, the context retrieves the state from the top of the stack (the state that was most recently pushed) and sets itself to that state. A common pattern is:
1. Call +saveGState+.
2. Manipulate the context's settings, changing its state.
3. Draw.
4. Call +restoreGState+ to restore the state and the settings to what they were before you manipulated them.
You do not have to do this before _every_ manipulation of a context's settings, because settings don't necessarily conflict with one another or with past settings. You can set the context's line color to red and then later to blue without any difficulty. But in certain situations you do want your manipulation of settings to be undoable, and I'll point out several such situations later in this chapter.
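In code, that pattern comes down to something like this minimal sketch:
----
let con = UIGraphicsGetCurrentContext()!
con.saveGState() // push the current state onto the stack
con.setFillColor(UIColor.red.cgColor)
con.fill(CGRect(x: 10, y: 10, width: 50, height: 50))
con.restoreGState() // the previous fill color (and all other settings) is back
----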
Many of the settings that constitute a graphics context's state, and that determine the behavior and appearance of drawing performed at that moment, are similar to those of any drawing application. Here are some of them, along with some of the commands that determine them (and some UIKit properties and methods that call them):
Line thickness and dash style::
`setLineWidth(_:)`, `setLineDash(phase:lengths:)`
pass:[
]UIBezierPath `lineWidth`, `setLineDash(_:count:phase:)`
Line end-cap style and join style:: `setLineCap(_:)`, `setLineJoin(_:)`, `setMiterLimit(_:)`
pass:[
]UIBezierPath `lineCapStyle`, `lineJoinStyle`, `miterLimit`
Line color or pattern:: `setStrokeColor(_:)`, `setStrokePattern(_:colorComponents:)`
pass:[
]UIColor `setStroke`
Fill color or pattern:: `setFillColor(_:)`, `setFillPattern(_:colorComponents:)`
pass:[
]UIColor `setFill`
Shadow:: `setShadow(offset:blur:color:)`
Overall transparency and compositing:: `setAlpha(_:)`, `setBlendMode(_:)`
// (and UIBezierPath `strokeWithBlendMode:alpha:`, `fillWithBlendMode:alpha:`)
////
Text features:: `CGContextSelectFont`, `CGContextSetFont`, `CGContextSetFontSize`, `CGContextSetTextDrawingMode`, `CGContextSetCharacterSpacing`
////
Anti-aliasing:: `setShouldAntialias(_:)`
// , `CGContextSetShouldSmoothFonts`
Additional settings include:
Clipping area:: Drawing outside the clipping area is not physically drawn.
Transform (or ``CTM,'' for ``current transform matrix''):: Changes how points that you specify in subsequent drawing commands are mapped onto the physical space of the canvas.
Many of these settings will be illustrated by examples later in this chapter.
==== Paths and Shapes
By issuing a series of instructions for moving an imaginary pen, you construct a _path_, tracing it out from point to point. You must first tell the pen where to position itself, setting the current point; after that, you issue commands telling the pen how to trace out each subsequent piece of the path, one by one. Each new piece of the path starts by default at the current point; its end becomes the new current point.
A path can be compound, meaning that it consists of multiple independent pieces. A single path might consist of two separate closed shapes: say, a rectangle and a circle. When you call `move(to:)` in the _middle_ of constructing a path, you pick up the imaginary pen and move it to a new location without tracing a segment, preparing to start an independent piece of the same path.(((compound paths)))
If you're worried, as you begin to trace out a path, that there might be an existing path and that your new path might be seen as a compound part of that existing path, you can call `beginPath` to specify that this is a different path; many of Apple's examples do this, but in practice I usually do not find it necessary.
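Here's a sketch of a compound path, one path containing two independent closed shapes (the commands used here are listed below):
----
let con = UIGraphicsGetCurrentContext()!
con.beginPath() // make sure we're starting a fresh path
con.addRect(CGRect(x: 10, y: 10, width: 100, height: 100))
con.addEllipse(in: CGRect(x: 150, y: 10, width: 100, height: 100))
con.setFillColor(UIColor.blue.cgColor)
con.fillPath() // one fill command, two shapes
----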
Here are some path-drawing commands you're likely to give:
Position the current point:: `move(to:)`
Trace a line:: `addLine(to:)`, `addLines(between:)`
Trace a rectangle:: `addRect(_:)`, `addRects(_:)`
Trace an ellipse or circle:: `addEllipse(in:)`
Trace an arc:: `addArc(tangent1End:tangent2End:radius:)`
Trace a Bezier curve with one or two control points:: `addQuadCurve(to:control:)`, `addCurve(to:control1:control2:)`
Close the current path:: `closePath`. This appends a line from the last point of the path to the first point. There's no need to do this if you're about to fill the path, since it's done for you.
Note that a path, in and of itself, does _not_ constitute drawing! First you provide a path; _then_ you draw. Drawing can mean stroking the path or filling the path, or both. Again, this should be a familiar notion from certain drawing applications.(((drawing, path)))(((paths))) The important thing is that stroking or filling a path _clears the path._ That path is now gone and we're ready to begin constructing a new path if desired:
Stroke or fill the current path (and clear the path):: `strokePath`, `fillPath(using:)`, `drawPath(using:)`. Use `drawPath(using:)` if you want both to fill and to stroke the path in a single command, because if you merely stroke it first with `strokePath`, the path is cleared and you can no longer fill it. There are also some convenience functions that create a path from a CGRect or similar and stroke or fill it, in a single move:
* `stroke(_:)`, `strokeLineSegments(between:)`
* `fill(_:)`
* `strokeEllipse(in:)`
* `fillEllipse(in:)`
If a path needs to be reused or shared, you can encapsulate it as a ((CGPath)). Like CGContext, CGPath and its mutable partner CGMutablePath are treated as class types under ``renamification,'' and the global C functions that manipulate them are treated as methods. You can copy the graphics context's current path using the CGContext `path` method, or you can create a new CGMutablePath and construct the path using various functions, such as `move(to:transform:)` and `addLine(to:transform:)`, that parallel the CGContext path-construction functions. Also, there are ways to create a path based on simple geometry or on an existing path:
// There's a new applyWithBlock that takes a block whose parameter is a CGPathElement, but I don't talk about CGPathElement so it isn't worth mentioning
* +init(rect:transform:)+
* +init(ellipseIn:transform:)+
* +init(roundedRect:cornerWidth:cornerHeight:transform:)+
* +copy(strokingWithWidth:lineCap:lineJoin:miterLimit:transform:)+
* +copy(dashingWithPhase:lengths:transform:)+
* +copy(using:)+ (takes a pointer to a CGAffineTransform)
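Here's a sketch of constructing a CGMutablePath and handing it to a graphics context for stroking:
----
let path = CGMutablePath()
path.move(to: CGPoint(x: 100, y: 100))
path.addLine(to: CGPoint(x: 100, y: 19))
let con = UIGraphicsGetCurrentContext()!
con.addPath(path) // adopt the CGPath as the context's current path
con.setLineWidth(20)
con.strokePath()
----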
To illustrate the typical use of path-drawing commands, I'll generate the up-pointing arrow shown in xref:FIGuparrow[]. This might not be the best way to create the arrow, and I'm deliberately avoiding use of the convenience functions, but it's clear and shows a nice basic variety of typical commands:
----
// obtain the current graphics context
let con = UIGraphicsGetCurrentContext()!
// draw a black (by default) vertical line, the shaft of the arrow
con.move(to:CGPoint(100, 100))
con.addLine(to:CGPoint(100, 19))
con.setLineWidth(20)
con.strokePath()
// draw a red triangle, the point of the arrow
con.setFillColor(UIColor.red.cgColor)
con.move(to:CGPoint(80, 25))
con.addLine(to:CGPoint(100, 0))
con.addLine(to:CGPoint(120, 25))
con.fillPath()
// snip a triangle out of the shaft by drawing in Clear blend mode
con.move(to:CGPoint(90, 101))
con.addLine(to:CGPoint(100, 90))
con.addLine(to:CGPoint(110, 101))
con.setBlendMode(.clear)
con.fillPath()
----
[[FIGuparrow]]
.A simple path drawing
image::figs/pios_1513.png[]
// Properly speaking, we should probably surround our drawing code with calls to `CGContextSaveGState` and `CGContextRestoreGState`, just in case. It probably wouldn't make any difference in this particular example, as the context does not persist between calls to `draw(_:)`, but it can't hurt.
The UIKit class ((UIBezierPath)) is actually a wrapper for CGPath; the wrapped path is its +cgPath+ property. It provides methods parallel to the CGContext and CGPath functions for constructing a path, such as:
// [role="pagebreak-before"]
* +init(rect:)+
* +init(ovalIn:)+
* +init(roundedRect:cornerRadius:)+
* +move(to:)+
* +addLine(to:)+
* +addArc(withCenter:radius:startAngle:endAngle:clockwise:)+
* +addQuadCurve(to:controlPoint:)+
* +addCurve(to:controlPoint1:controlPoint2:)+
* +close+
When you call the UIBezierPath instance methods +fill+ or +stroke+ or +fill(with:alpha:)+ or +stroke(with:alpha:)+, the current graphics context settings are saved, the wrapped CGPath is made the current graphics context's path and stroked or filled, and the current graphics context settings are restored.
Using UIBezierPath together with UIColor, we could rewrite our arrow-drawing routine entirely with UIKit methods:
----
let p = UIBezierPath()
// shaft
p.move(to:CGPoint(100,100))
p.addLine(to:CGPoint(100, 19))
p.lineWidth = 20
p.stroke()
// point
UIColor.red.set()
p.removeAllPoints()
p.move(to:CGPoint(80,25))
p.addLine(to:CGPoint(100, 0))
p.addLine(to:CGPoint(120, 25))
p.fill()
// snip
p.removeAllPoints()
p.move(to:CGPoint(90,101))
p.addLine(to:CGPoint(100, 90))
p.addLine(to:CGPoint(110, 101))
p.fill(with:.clear, alpha:1.0)
----
There's no savings of code here over calling Core Graphics functions, so your choice of Core Graphics or UIKit is a matter of taste.
// UIBezierPath is also useful when you want to capture a CGPath and pass it around as an object; an example appears in xref:chap_id34[]. See also the discussion in xref:chap_id16[] of CAShapeLayer, which takes a CGPath that you've constructed and draws it for you within its own bounds.
==== Clipping
A path can be used to mask out areas, protecting them from future drawing. This is called _clipping_. By default, a graphics context's clipping region is the entire graphics context, meaning that you can draw anywhere within the context.(((graphics context, clipping region)))(((clipping)))
The clipping area is a feature of the context as a whole, and any new clipping area is applied by intersecting it with the existing clipping area. To restore your clipping area to the default, call `resetClip`.
// resetClip has only just appeared in the documentation, but tests reveal that it works all the way back to iOS 8 which is far as I can test, so I presume it has always been there and just wasn't documented
To illustrate, I'll rewrite the code that generated our original arrow (xref:FIGuparrow[]) to use clipping instead of a blend mode to ``punch out'' the triangular notch in the tail of the arrow. This is a little tricky, because what we want to clip to is not the region inside the triangle but the region outside it. To express this, we'll use a compound path consisting of more than one closed area ��� the triangle, and the drawing area as a whole (which we can obtain as the context's `boundingBoxOfClipPath`).
Both when filling a compound path and when using it to express a clipping region, the system follows one of two rules:
Winding rule:: The fill or clipping area is denoted by an alternation in the direction (clockwise or counterclockwise) of the path demarcating each region.
Even-odd rule (EO):: The fill or clipping area is denoted by a simple count of the paths demarcating each region.
Our situation is extremely simple, so it's easier to use the even-odd rule:
----
// obtain the current graphics context
let con = UIGraphicsGetCurrentContext()!
// punch triangular hole in context clipping region
con.move(to:CGPoint(90, 100))
con.addLine(to:CGPoint(100, 90))
con.addLine(to:CGPoint(110, 100))
con.closePath()
con.addRect(con.boundingBoxOfClipPath)
con.clip(using:.evenOdd)
// draw the vertical line
con.move(to:CGPoint(100, 100))
con.addLine(to:CGPoint(100, 19))
con.setLineWidth(20)
con.strokePath()
// draw the red triangle, the point of the arrow
con.setFillColor(UIColor.red.cgColor)
con.move(to:CGPoint(80, 25))
con.addLine(to:CGPoint(100, 0))
con.addLine(to:CGPoint(120, 25))
con.fillPath()
----
The UIBezierPath clipping commands are +usesEvenOddFillRule+ and +addClip+.
.How Big Is My Context?
****
At first blush, it appears that there's no way to learn a graphics context's size. Typically, this doesn't matter, because either you created the graphics context or it's the graphics context of some object whose size you know, such as a UIView. But in fact, because the default clipping region of a graphics context is the entire context, you can use `boundingBoxOfClipPath` to learn the context's ``bounds.''(((graphics context, size)))
****
==== Gradients
Gradients can range from the simple to the complex. A simple gradient (which is all I'll describe here) is determined by a color at one endpoint along with a color at the other endpoint, plus (optionally) colors at intermediate points; the gradient is then painted either linearly between two points or radially between two circles. You can't use a gradient as a path's fill color, but you can restrict a gradient to a path's shape by clipping, which will sometimes be good enough.(((gradients)))(((CGGradient)))
To illustrate, I'll redraw our arrow, using a linear gradient as the ``shaft'' of the arrow (xref:FIGuparrowGradient[]):
----
// obtain the current graphics context
let con = UIGraphicsGetCurrentContext()!
// punch triangular hole in context clipping region
con.move(to:CGPoint(89, 100))
con.addLine(to:CGPoint(100, 90))
con.addLine(to:CGPoint(111, 100))
con.closePath()
con.addRect(con.boundingBoxOfClipPath)
con.clip(using: .evenOdd)
// draw the vertical line, add its shape to the clipping region
con.move(to:CGPoint(100, 100))
con.addLine(to:CGPoint(100, 19))
con.setLineWidth(20)
con.replacePathWithStrokedPath()
con.clip()
// draw the gradient
let locs : [CGFloat] = [ 0.0, 0.5, 1.0 ]
let colors : [CGFloat] = [
0.8, 0.4, // starting color, transparent light gray
0.1, 0.5, // intermediate color, darker less transparent gray
0.8, 0.4, // ending color, transparent light gray
]
let sp = CGColorSpaceCreateDeviceGray()
let grad = CGGradient(
colorSpace:sp, colorComponents: colors, locations: locs, count: 3)!
con.drawLinearGradient(grad,
start: CGPoint(89,0), end: CGPoint(111,0), options:[])
con.resetClip() // done clipping
// draw the red triangle, the point of the arrow
con.setFillColor(UIColor.red.cgColor)
con.move(to:CGPoint(80, 25))
con.addLine(to:CGPoint(100, 0))
con.addLine(to:CGPoint(120, 25))
con.fillPath()
----
[[FIGuparrowGradient]]
.Drawing with a gradient
image::figs/pios_1514.png[]
The call to `replacePathWithStrokedPath` pretends to stroke the current path, using the current line width and other line-related context state settings, but then creates a new path representing the outside of that stroked path. Instead of a thick line we now have a rectangular region that we can use as the clip region.
We then create the gradient and paint it. The procedure is verbose but simple; everything is boilerplate. We describe the gradient as an array of locations on the continuum between one endpoint (+0.0+) and the other endpoint (+1.0+), along with the color components of the colors corresponding to each location; in this case, I want the gradient to be lighter at the edges and darker in the middle, so I use three locations, with the dark one at +0.5+. We must also supply a color space; this will tell the gradient how to interpret our color components. Finally, we create the gradient and paint it into place.
(See also the discussion of gradient CIFilters earlier in this chapter. For yet another way to create a simple gradient, see the discussion of CAGradientLayer in the next chapter.)
==== Colors and Patterns
A color is a ((CGColor)). CGColor is not difficult to work with, and can be converted to and from a UIColor through UIColor's +init(cgColor:)+ and its +cgColor+ property.
New in iOS 13, `draw(_:)` is called when the user interface style (light or dark) changes, and `UITraitCollection.current` is set for you, so any dynamic UIColors you use while drawing will be correct for the current interface style. But there's no such thing as a dynamic CGColor, so if you're using CGColor in some other situation, you might need to trigger a redraw manually. For an example, see xref:SECInterfaceStyle[].((("mode, light or dark", "colors")))(((color, dynamic)))(((dynamic, color)))
A pattern is also a kind of color. You can create a pattern color and stroke or fill with it. The simplest way is to draw a minimal tile of the pattern into a UIImage and create the color by calling UIColor's `init(patternImage:)`. To illustrate, I'll create a pattern of horizontal stripes and use it to paint the point of the arrow instead of a solid red color (xref:FIGuparrowStripes[]):(((color, pattern)))
----
// create the pattern image tile
let r = UIGraphicsImageRenderer(size:CGSize(4,4))
let stripes = r.image { ctx in
let imcon = ctx.cgContext
imcon.setFillColor(UIColor.red.cgColor)
imcon.fill(CGRect(0,0,4,4))
imcon.setFillColor(UIColor.blue.cgColor)
imcon.fill(CGRect(0,0,4,2))
}
// paint the point of the arrow with it
let stripesPattern = UIColor(patternImage:stripes)
stripesPattern.setFill()
let p = UIBezierPath()
p.move(to:CGPoint(80,25))
p.addLine(to:CGPoint(100,0))
p.addLine(to:CGPoint(120,25))
p.fill()
----
[[FIGuparrowStripes]]
.A patterned fill
image::figs/pios_1515.png[]
The Core Graphics equivalent, ((CGPattern)), is considerably more powerful, but also much more elaborate:(((patterns)))
----
con.saveGState()
let sp2 = CGColorSpace(patternBaseSpace:nil)!
con.setFillColorSpace(sp2)
let drawStripes : CGPatternDrawPatternCallback = { _, con in
con.setFillColor(UIColor.red.cgColor)
con.fill(CGRect(0,0,4,4))
con.setFillColor(UIColor.blue.cgColor)
con.fill(CGRect(0,0,4,2))
}
var callbacks = CGPatternCallbacks(
version: 0, drawPattern: drawStripes, releaseInfo: nil)
let patt = CGPattern(info:nil, bounds: CGRect(0,0,4,4),
matrix: .identity,
xStep: 4, yStep: 4,
tiling: .constantSpacingMinimalDistortion,
isColored: true, callbacks: &callbacks)!
var alph : CGFloat = 1.0
con.setFillPattern(patt, colorComponents: &alph)
con.move(to:CGPoint(80, 25))
con.addLine(to:CGPoint(100, 0))
con.addLine(to:CGPoint(120, 25))
con.fillPath()
con.restoreGState()
----
To understand that code, it helps to read it backward. Everything revolves around the creation of `patt` using the CGPattern initializer. A pattern is a drawing in a rectangular ``cell''; we have to state both the size of the cell (`bounds:`) and the spacing between origin points of cells (`xStep:`, `yStep:`). In this case, the cell is 4×4, and every cell exactly touches its neighbors both horizontally and vertically. We have to supply a transform to be applied to the cell (`matrix:`); in this case, we're not doing anything with this transform, so we supply the identity transform. We supply a tiling rule (`tiling:`). We have to state whether this is a color pattern or a stencil pattern; it's a color pattern, so `isColored:` is +true+. And we have to supply a pointer to a callback function that actually draws the pattern into its cell (`callbacks:`).
Except that that's _not_ what we have to supply as the `callbacks:` argument. What we actually have to supply here is a pointer to a CGPatternCallbacks struct. This struct consists of a `version:` whose value is fixed at +0+, along with pointers to _two_ functions, the `drawPattern:` to draw the pattern into its cell, and the `releaseInfo:` called when the pattern is released. We're not specifying the second function here; it is for memory management, and we don't need it in this simple example.
As you can see, the actual pattern-drawing function (`drawStripes`) is very simple. The only tricky issue is that it must agree with the CGPattern as to the size of a cell, or the pattern won't come out the way you expect. We know in this case that the cell is 4×4. So we fill it with red, and then fill its lower half with blue. When these cells are tiled touching each other horizontally and vertically, we get the stripes that you see in xref:FIGuparrowStripes[].
Having generated the CGPattern, we call the context's `setFillPattern`; instead of setting a fill color, we're setting a fill pattern, to be used the next time we fill a path (in this case, the triangular arrowhead). The `colorComponents:` parameter is a pointer to a CGFloat, so we have to set up the CGFloat itself beforehand.
The only thing left to explain is the first three lines of our code. It turns out that before you can call `setFillPattern` with a colored pattern, you have to set the context's fill color space to a pattern color space. If you neglect to do this, you'll get an error when you call `setFillPattern`. This means that the code as presented has left the graphics context in an undesirable state, with its fill color space set to a pattern color space. This would cause trouble if we were later to try to set the fill color to a normal color. The solution is to wrap the code in calls to `saveGState` and `restoreGState`.
You may have observed in xref:FIGuparrowStripes[] that the stripes do not fit neatly inside the triangle of the arrowhead: the bottommost stripe is something like half a blue stripe. This is because a pattern is positioned not with respect to the shape you are filling (or stroking), but with respect to the graphics context as a whole. We could shift the pattern position by calling `setPatternPhase` before drawing.
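For instance, to shift the pattern two points downward before filling (a minimal sketch; the offset is arbitrary):

----
con.setPatternPhase(CGSize(width:0, height:2))
// subsequent pattern fills start the stripes two points lower
----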
==== Graphics Context Transforms
Just as a UIView can have a ((transform)), so can a graphics context. Applying a transform to a graphics context has no effect on the drawing that's already in it; like other graphics context settings, it affects only the drawing that takes place after it is applied, altering the way the coordinates you provide are mapped onto the graphics context's area. A graphics context's transform is called its _CTM_, for ``current transform matrix.''(((CTM)))
It is quite usual to take full advantage of a graphics context's CTM to save yourself from performing even simple calculations. You can multiply the current transform by any ((CGAffineTransform)) using `concatenate(_:)`; there are also convenience methods for applying a translate (`translateBy(x:y:)`), scale (`scaleBy(x:y:)`), or rotate (`rotate(by:)`) transform to the current transform.
The base transform for a graphics context is already set for you when you obtain the context; that's how the system is able to map context drawing coordinates onto screen coordinates. Whatever transforms you apply are applied to the current transform, so the base transform remains in effect and drawing continues to work. You can return to the base transform after applying your own transforms by wrapping your code in calls to `saveGState` and `restoreGState`.
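Schematically, the pattern looks like this (a sketch, not tied to any particular drawing):

----
con.saveGState()
con.translateBy(x:80, y:0)
con.rotate(by: .pi/8)
// ... draw in the transformed coordinate system ...
con.restoreGState() // back to the base transform
----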
Here's an example. We have hitherto been drawing our upward-pointing arrow with code that knows how to place that arrow at only one location: the top left of its rectangle is hard-coded at +(80,0)+. This is silly. It makes the code hard to understand, as well as inflexible and difficult to reuse. Surely the sensible thing would be to draw the arrow at +(0,0)+, by subtracting 80 from all the x-values in our existing code. Now it is easy to draw the arrow at _any_ position, simply by applying a translate transform beforehand, mapping +(0,0)+ to the desired top-left corner of the arrow. To draw it at +(80,0)+, we would say:
----
con.translateBy(x:80, y:0)
// now draw the arrow at (0,0)
----
A rotate transform is particularly useful, allowing you to draw in a rotated orientation without any nasty trigonometry. It's a bit tricky because the point around which the rotation takes place is the origin. This is rarely what you want, so you have to apply a translate transform first, to map the origin to the point around which you really want to rotate. But then, after rotating, in order to figure out where to draw, you will probably have to reverse your translate transform.(((rotation, drawing)))(((drawing, rotated)))
To illustrate, here's code to draw our arrow repeatedly at several angles, pivoting around the end of its tail (xref:FIGuparrowRotate[]). Since the arrow will be drawn multiple times, I'll start by encapsulating the drawing of the arrow as a UIImage. This is not merely to reduce repetition and make drawing more efficient; it's also because we want the entire arrow to pivot, including the pattern stripes, and this is the simplest way to achieve that:
----
lazy var arrow : UIImage = {
    let r = UIGraphicsImageRenderer(size:CGSize(40,100))
    return r.image { _ in
        self.arrowImage()
    }
}()
func arrowImage () {
    // obtain the current graphics context
    let con = UIGraphicsGetCurrentContext()!
    // draw the arrow into the graphics context
    // draw it at (0,0)! adjust all x-values by subtracting 80
    // ... actual code omitted ...
}
----
In our `draw(_:)` implementation, we draw the arrow image multiple times:
----
override func draw(_ rect: CGRect) {
    let con = UIGraphicsGetCurrentContext()!
    self.arrow.draw(at:CGPoint(0,0))
    for _ in 0..<3 {
        con.translateBy(x: 20, y: 100)
        con.rotate(by: 30 * .pi/180.0)
        con.translateBy(x: -20, y: -100)
        self.arrow.draw(at:CGPoint(0,0))
    }
}
----
[[FIGuparrowRotate]]
.Drawing rotated
image::figs/pios_1516.png[]
// boring and confusing, we've already forgotten about that
////
A transform is also one more solution for the ``flip'' problem we encountered earlier with `CGContextDrawImage`. Instead of reversing the drawing, we can reverse the context into which we draw it. Essentially, we apply a ``flip'' transform to the context's coordinate system. You move the context's top downward, and then reverse the direction of the pass:[y-coordinate] by applying a scale transform whose y-multiplier is +-1+:(((flipping)))
----
CGContextTranslateCTM(con, 0, theHeight)
CGContextScaleCTM(con, 1.0, -1.0)
----
How far down you move the context's top (`theHeight`) depends on how you intend to draw the image.
////
==== Shadows
To add a shadow to a drawing, give the context a shadow value before drawing. The shadow position is expressed as a CGSize, where the positive direction for both values indicates down and to the right. The blur value is an open-ended positive number; Apple doesn't explain how the scale works, but experimentation shows that 12 is nice and blurry, 99 is so blurry as to be shapeless, and higher values become problematic.(((shadows)))
xref:FIGuparrowShadow[] shows the result of the same code that generated xref:FIGuparrowRotate[], except that before we start drawing the arrow repeatedly, we give the context a shadow:
----
let con = UIGraphicsGetCurrentContext()!
con.setShadow(offset: CGSize(7, 7), blur: 12)
self.arrow.draw(at:CGPoint(0,0))
// ... and so on
----
[[FIGuparrowShadow]]
.Drawing with a shadow
image::figs/pios_1517.png[]
It may not be evident from xref:FIGuparrowShadow[], but we are adding a shadow each time we draw. This means the arrows are able to cast shadows on one another. Suppose, instead, that we want all the arrows to cast a single shadow collectively. The way to achieve this is with a _transparency layer_; this is basically a subcontext that accumulates all drawing and then adds the shadow. Our code for drawing the shadowed arrows now looks like this:(((transparency, layer)))(((layers, transparency)))
----
let con = UIGraphicsGetCurrentContext()!
con.setShadow(offset: CGSize(7, 7), blur: 12)
con.beginTransparencyLayer(auxiliaryInfo: nil)
self.arrow.draw(at:CGPoint(0,0))
for _ in 0..<3 {
    con.translateBy(x: 20, y: 100)
    con.rotate(by: 30 * .pi/180.0)
    con.translateBy(x: -20, y: -100)
    self.arrow.draw(at:CGPoint(0,0))
}
con.endTransparencyLayer()
----
==== Erasing
The CGContext `clear(_:)` function erases all existing drawing in a CGRect; combined with clipping, it can erase an area of any shape, ``punching a hole'' through all existing drawing.(((clear)))
The behavior of `clear(_:)` depends on whether the context is transparent or opaque. This is particularly obvious and intuitive when drawing into an image context. If the image context is transparent, `clear(_:)` erases to transparent; otherwise it erases to black.
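Here's a minimal sketch demonstrating the difference in an image context (the sizes are arbitrary):

----
let fmt = UIGraphicsImageRendererFormat()
fmt.opaque = false // change to true and the hole becomes black
let r = UIGraphicsImageRenderer(
    size: CGSize(width:100, height:100), format: fmt)
let im = r.image { ctx in
    let con = ctx.cgContext
    con.setFillColor(UIColor.blue.cgColor)
    con.fill(CGRect(x:0, y:0, width:100, height:100))
    con.clear(CGRect(x:0, y:0, width:30, height:30))
}
----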
When drawing directly into a view, if the view's background color is `nil` or a color with even a tiny bit of transparency, the result of `clear(_:)` will appear to be transparent, punching a hole right through the view including its background color; if the background color is completely opaque, the result of `clear(_:)` will be black. This is because the view's background color determines whether the view's graphics context is transparent or opaque, so this is essentially the same behavior that I described in the preceding paragraph.(((views, transparency)))(((views, black background)))(((background, black)))(((graphics context, opaque)))(((opaque, graphics context)))
xref:FIGclearRect[] illustrates; the blue square on the left has been partly cut away to black, while the blue square on the right has been partly cut away to transparency. Yet these are instances of the same UIView subclass, drawn with exactly the same code! The UIView subclass's `draw(_:)` looks like this:
----
let con = UIGraphicsGetCurrentContext()!
con.setFillColor(UIColor.blue.cgColor)
con.fill(rect)
con.clear(CGRect(0,0,30,30))
----
[[FIGclearRect]]
.The very strange behavior of the clear function
image::figs/pios_1512.png[]
The difference between the views in xref:FIGclearRect[] is that the +backgroundColor+ of the first view is solid red with an alpha of +1+, while the +backgroundColor+ of the second view is solid red with an alpha of +0.99+. This difference is imperceptible to the eye -- not to mention that the red color never appears, as it is covered with a blue fill! Nevertheless, it completely changes the effect of `clear(_:)`.
If you find this as confusing as I do, the simplest solution may be to drop down to the level of the view's `layer` and set its `isOpaque` property after setting the view's background color:
----
self.backgroundColor = .red
self.layer.isOpaque = false
----
That gives you a final and dependable say on the behavior of `clear(_:)`. If `layer.isOpaque` is `false`, `clear(_:)` erases to transparency; if it is `true`, it erases to black.
////
.Automatic Color Depth
****
Starting in iOS 12, the color depth of a graphics context is determined automatically by what is drawn into the context. The idea is to save memory; if no color is drawn into the graphics context, the context's pixels don't need to accommodate color information.
In general, this shouldn't make any difference to your code, but I've encountered some situations, such as when drawing with a pattern color, where color drawing appears as grayscale, evidently because the graphics context is not expanding its color depth automatically from grayscale to color.
// Here are some possible workarounds if you encounter this issue. One approach
A possible workaround is to draw a zero-size color fill (`con` is the current graphics context):
----
con.setFillColor(UIColor.blue.cgColor)
con.fill(CGRect(0,0,0,0))
----
// Another approach, if this is a UIGraphicsImageRenderer context, is to create the renderer with a UIGraphicsImageRendererFormat whose `preferredRange` is set explicitly to `.standard` (or `.extended`).
****
////
=== Points and Pixels
A point is a dimensionless location described by an x-coordinate and a y-coordinate. When you draw in a graphics context, you specify the points at which to draw, and this works regardless of the device's resolution, because Core Graphics maps your drawing nicely onto the physical output using the base CTM and anti-aliasing. Therefore, throughout this chapter I've concerned myself with graphics context points, disregarding their relationship to screen pixels.(((pixels, vs. points)))
Nonetheless, pixels do exist. A pixel is a physical, integral, dimensioned unit of display in the real world. Whole-numbered points effectively lie between pixels, and this can matter if you're fussy, especially on a single-resolution device. If a vertical path with whole-number coordinates is stroked with a line width of 1, half the line falls on each side of the path, and the drawn line on the screen of a single-resolution device will seem to be 2 pixels wide (because the device can't illuminate half a pixel).
You may sometimes encounter the suggestion that if this effect is objectionable, you should try shifting the line's position by +0.5+, to center it in its pixels. This advice may appear to work, but it naively assumes that one point corresponds to one pixel, which is false on a high-resolution device. A more sophisticated approach is to obtain the UIView's +contentScaleFactor+ property. You can divide by this value to convert from pixels to points. Consider also that the most accurate way to draw a vertical or horizontal line is not to stroke a path but to fill a rectangle. This UIView subclass code will draw a perfect 1-pixel-wide vertical line on any device (`con` is the current graphics context): pass:none[]
----
con.fill(CGRect(100,0,1.0/self.contentScaleFactor,100))
----
=== Content Mode
A view that draws something within itself, as opposed to merely having a background color and subviews (as in the previous chapter), has _content_. This means that its +contentMode+ property becomes important whenever the view is resized. As I mentioned earlier, the drawing system will avoid asking a view to redraw itself from scratch if possible; instead, it will use the cached result of the previous drawing operation (the bitmap backing store). If the view is resized, the system may simply stretch or shrink or reposition the cached drawing, if your +contentMode+ setting instructs it to do so.(((views, content mode)))
It's a little tricky to illustrate this point when the view's content is coming from `draw(_:)`, because I have to arrange for the view to obtain its content from `draw(_:)` and then cause it to be resized without `draw(_:)` being called _again_. As the app starts up, I'll create an instance of a UIView subclass, MyView, that knows how to draw our arrow; then I'll use delayed performance to resize the instance after the window has shown and the interface has been initially displayed (for my `delay` function, see xref:appb[]):
----
delay(0.1) {
    mv.bounds.size.height *= 2 // mv is the MyView instance
}
----
We double the height of the view without causing `draw(_:)` to be called. The result is that the view's drawing appears at double its correct height. If our view's `draw(_:)` code is the same as the code that generated xref:FIGuparrowGradient[], we get xref:FIGstretched[].
[[FIGstretched]]
.Automatic stretching of content
image::figs/pios_1518.png[]
Sooner or later, however, `draw(_:)` will be called, and the drawing will be refreshed in accordance with our code. Our code doesn't say to draw the arrow at a height that is relative to the height of the view's bounds; it draws the arrow at a fixed height. Therefore, the arrow will snap back to its original size.
// If a view is to be resized only _momentarily_ -- say, as part of an animation -- then stretching behavior might be exactly what you want. Suppose we're going to animate the view by making it get a little larger for a moment and then returning it to its original size, perhaps as a way of attracting the user's attention. Then presumably we do want the view's content to stretch and shrink as the view stretches and shrinks; that's the whole point of the animation. This is precisely what the default +contentMode+ value, +.scaleToFill+, does for us. And remember, it does it efficiently; what's being stretched and shrunk is just a cached image of our view's content. A view's content is actually its layer's content, and I'll have much more to say about that in the next chapter.
A view's +contentMode+ property should therefore usually be in agreement with how the view draws itself. Our `draw(_:)` code dictates the size and position of the arrow relative to the view's bounds origin, its top left; so we could set its +contentMode+ to +.topLeft+. Alternatively, we could set it to +.redraw+; this will cause automatic scaling of the cached content to be turned off -- instead, when the view is resized, its +setNeedsDisplay+ method will be called, ultimately triggering `draw(_:)` to redraw the content. pass:none[]
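For instance (assuming `mv` is our MyView instance, as before):

----
mv.contentMode = .redraw // or .topLeft
----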
////
what on earth does clearsContextBeforeDrawing do? I have not found any situation where changing this makes any difference to what is drawn
////