Apple's method for dealing with backwards compatibility is simultaneously the best thing ever, and a huge pain in the ass.
Basically, when compiling an app, we set both the Base SDK and the Deployment Target. The Base SDK should be set to the highest version available. This gives us full access to all the new APIs (as well as any bug fixes and performance improvements). The Deployment Target, on the other hand, should be set to the lowest version of the OS we intend to support.
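In Xcode, these two knobs correspond to the SDKROOT and IPHONEOS_DEPLOYMENT_TARGET build settings. A minimal xcconfig sketch of that setup (the setting names are real; the exact SDK identifier varies with the Xcode release):

```
// Build against the newest available SDK...
SDKROOT = iphoneos4.2

// ...but allow the resulting binary to load and run on iOS 3.0 and later.
IPHONEOS_DEPLOYMENT_TARGET = 3.0
```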
So, let's say I build an application that should run on all iOS 3.0 devices and later. I set the Base SDK to 4.2 (my current non-beta maximum) and the Deployment Target to 3.0. When I compile my application, the following happens:

- Every API that already exists in iOS 3.0 is linked normally.
- Every API added after iOS 3.0 (up through 4.2) is weak linked, so the app can still load on devices where those symbols don't exist.
This lets us compile an application with iOS 4.2 features, but still run it on an iOS 3.0 device. Of course, if we actually call any of the iOS 4.2 features on our 3.0 device, the app will crash--so we have to check at runtime and make sure we only call code that the device actually supports.
This is great. I can conditionally mix in iAd (for example) into an application, but continue to support existing, older clients. I don't have to compile multiple versions of my app.
However, there is one huge problem. There is no way to check the code at compile time and make sure we aren't accidentally calling code above the Deployment Target in a generally reachable branch.
Things aren't so bad when you're just adding a new feature to an existing app. You know that the code you're adding is above the Deployment Target, and you can take care to isolate it properly. The real problem comes in when you're doing general coding or debugging. It's so easy to unwittingly use parts of the SDK that are above the deployment target--especially when using autocomplete.
And, there's only one way to catch these errors--testing. Sure, I like testing. I'm a huge fan of testing, but testing alone is not a sufficient solution for this problem. Here's the cold, hard truth: It's very hard (I would say impractical if not impossible) to make sure that we actually execute every possible branch of our code during our test cycles. And if we don't test every branch, we don't know if we're really safe or not. Above-Deployment API calls can be scattered throughout any part of our code, lying in wait like little land mines, just waiting to kill our apps.
To me, this really feels like something the compiler or static analyzer needs to address. After all, the compiler (or at least, the linker) already knows about the SDK differences--it uses them to determine what should or should not be weak linked. Shouldn't it also be able to pass this information back to us?
In the past, we could at least strip out the advanced features and reduce the Base SDK to the Deployment Target. Once we got a clean build, we could then increase the Base SDK and re-insert our above-deployment features. However, Apple no longer provides earlier versions of the SDK with Xcode. It makes a certain amount of sense. If they provided copies of earlier SDKs, people would use those to build and submit their apps--and Apple really wants everyone to migrate to the newer SDKs.
I also wonder if Apple's subtly encouraging us to stop supporting old devices. After all, by most reports 90-95% of all iOS users have already migrated to some version of iOS 4.0. Given the extra testing time and developer stress needed when supporting iOS 3.x devices, I have to wonder if it even makes sense to try.
I actually spoke with a couple of Apple Engineers about this at WWDC. They agreed that it's a big problem, and suggested that it might be possible to find old copies of the SDK and install those manually. That sounds like a lot of effort--especially since there's no guarantee that it will even work. And, even if it does work, it only provides the most primitive sort of support.
No, after seeing some of the other stuff Apple has done with LLVM and Clang, I sincerely hope they add more support for detecting above-Deployment API calls at compile time.