React Native Expo automated pipeline

Expo is a framework and a platform for building native iOS and Android apps using React Native. Expo.dev is a hosted platform for building, deploying, and publishing those apps.

This is my pipeline setup with GitHub Actions and Expo.

Ingredients #

Expo and EAS #

The Expo build service is called EAS (Expo Application Services). You get 30 free builds per month (as of writing), which is more than enough for weekly releases.

EAS has two types of jobs for each platform: Build and Submit. The build step packages the app into a format accepted by the respective provider, and the submit step uploads the build to the provider for beta testing.
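
On the command line these two steps map roughly to the following (a sketch; the profile name here is an assumption, the real one comes from your eas.json):

eas build --platform ios --profile production
eas submit --platform ios --latest

In this setup the pipeline never calls eas submit directly; instead eas-build.py (shown later) runs eas build and adds --auto-submit only when a submission is actually wanted.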

You can view the builds and submissions in the web UI, but all the pipeline triggering happens in GitHub Actions. Expo has its own automation, but as usual there are edge cases that need extra attention and are hard to cover in a managed service.

GitHub Actions #

This is the main pipeline. It runs once a week on the main branch for both iOS and Android, and can be triggered manually anytime for either or both.

.github/workflows/release.yaml:

name: Release
on:
  workflow_dispatch:
    inputs:
      platform:
        type: choice
        description: "Platform to release to"
        options:
          - ios
          - android
          - all
  schedule:
    - cron: "30 2 * * 3"

jobs:
  release:
    name: 📱 build and submit mobile
    runs-on: ubuntu-latest
    steps:
      - name: 💻 Get Code
        uses: actions/checkout@v3
        with:
          fetch-depth: 0 # Important for accessing the complete commit history

      - name: 🔧 Prepare environment
        uses: actions/setup-node@v3
        with:
          node-version: 22.x

      - name: 🧾 Update app.json with new version
        run: |
          python prepare-app-json-for-build.py
          echo "Updated app.json"
          cat app.json

      - name: 🏗 Setup EAS
        uses: expo/expo-github-action@v8
        with:
          eas-version: latest
          token: ${{ secrets.EXPO_API_ACCESS_TOKEN }}

      - name: 📦 Install dependencies
        run: yarn install

      - name: 🏗️ Prepare Play Console service account key
        run: echo "${{ secrets.GOOGLE_PLAY_SERVICE_ACCOUNT_BASE64 }}" | base64 -d > play_console_service_account_key.json

      - name: 🤖 Build Android
        if: inputs.platform == 'android' || inputs.platform == 'all' || github.event_name == 'schedule'
        continue-on-error: true
        run: ./eas-build.py android

      - name: 📱 Build iOS
        if: inputs.platform == 'ios' || inputs.platform == 'all' || github.event_name == 'schedule'
        continue-on-error: true
        run: ./eas-build.py ios
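
Since the workflow has a workflow_dispatch trigger with a platform input, it can be kicked off from the GitHub UI or, as a sketch, with the GitHub CLI:

gh workflow run release.yaml -f platform=ios

Scheduled runs have no inputs, which is why the if: conditions on the build steps also check github.event_name == 'schedule'.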

What does prepare-app-json-for-build.py do?

It updates the app.json file with the new build number. We have to do that since neither Apple nor Google will accept a new build with the same version and build number as a previous build. We still control the version number in code (in the app.json file), while the build number is managed by the CI/CD pipeline using this Python script.
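
As an illustration (the app name and numbers are made up), the relevant fields in app.json end up looking something like this after 1234 commits:

{
  "expo": {
    "name": "MyApp",
    "version": "1.4.0",
    "ios": {
      "buildNumber": "1234"
    },
    "android": {
      "versionCode": 11234
    }
  }
}

version is whatever we last set by hand, buildNumber is the commit count, and versionCode is the same count offset by 10000.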

prepare-app-json-for-build.py:

#!/usr/bin/env python3
import json
import subprocess
import sys

def get_total_commits() -> int:
    """Returns the total number of commits in the current Git repository."""
    return int(subprocess.check_output(["git", "rev-list", "--count", "HEAD"]).decode().strip())

def get_android_version_code(build_number: int) -> int:
    """Returns the Android version code"""
    # Android requires version codes to be unique integers
    # We add 10000 as a base to avoid conflicts with legacy builds
    return 10000 + build_number

def update_app_json():
    """Updates the app.json file with the new build number."""

    # Fetch total commits for build number
    build_number = get_total_commits()

    if build_number == 1:
        # A count of 1 usually means a shallow clone (checkout without
        # fetch-depth: 0), which would produce a bogus build number.
        print("Build number cannot be 1")
        sys.exit(1)

    # Read the existing app.json file
    try:
        with open('app.json', 'r') as file:
            data = json.load(file)
    except Exception as e:
        print(f"Error reading app.json: {e}")
        sys.exit(1)

    if 'expo' not in data:
        print("expo key not found in app.json.")
        sys.exit(1)
    if 'android' not in data['expo']:
        print("android key not found in app.json.")
        sys.exit(1)
    if 'versionCode' not in data['expo']['android']:
        print("versionCode key not found in app.json.")
        sys.exit(1)
    if 'ios' not in data['expo']:
        print("ios key not found in app.json.")
        sys.exit(1)
    if 'buildNumber' not in data['expo']['ios']:
        print("buildNumber key not found in app.json.")
        sys.exit(1)
    if 'version' not in data['expo']:
        print("version key not found in app.json.")
        sys.exit(1)

    android_version_code = get_android_version_code(build_number)

    # Update the app.json data (Expo expects android.versionCode as an
    # integer and ios.buildNumber as a string)
    data['expo']['android']['versionCode'] = android_version_code
    data['expo']['ios']['buildNumber'] = str(build_number)

    # Write the updated data back to app.json
    try:
        with open('app.json', 'w') as file:
            json.dump(data, file, indent=2)
            print("app.json has been updated successfully.")
    except Exception as e:
        print(f"Error writing app.json: {e}")
        sys.exit(1)

if __name__ == "__main__":
    update_app_json()
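
To see what the script will do without waiting for CI, you can run it locally and inspect the diff (a sketch):

python3 prepare-app-json-for-build.py
git diff app.json
git checkout -- app.json

The last command discards the change again; the pipeline owns these fields, and the modified app.json is never committed back to the repository.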

But what happens if the iOS build was run manually on Tuesday, and the automated build then runs on Wednesday?

Excellent question! The problem here is again that the providers will reject builds with the same version and build number as a previous build. This means manual runs could interfere with the automated runs, and the submission step would fail when Apple/Google rejects the upload due to a build number collision.

This is where the eas-build.py script comes in. It checks whether there are any new commits since the last completed EAS build (since the build number is just the commit count, no new commits means the build number has not changed) and handles the conflict gracefully.

If there is already a build in EAS with the same build number, we do not submit anything to the provider (Apple or Google).

Does that mean you skip the build entirely?

No. We could do that, but that might cause other problems down the line. Imagine that there are no commits for a few weeks, or even months. Then suddenly there's a critical bug that needs fixing, we jump on it, get a fix together and submit a new build. But since the last successful build was weeks or months ago, some new dependency or other change outside of our control could mean the pipeline fails. There could be multiple failures that have accumulated over time, and they now block the release. Now we have to sit down and try to understand the pipeline again, and stay up all night trying to fix it, before we can ship the bug fix!

So instead of skipping the build, we meet in the middle: run the build step, but don't submit it. This keeps the pipeline warm and alerts if the build breaks for some other reason than our code change, without firing off needless submissions that fail.

Here is eas-build.py:

#!/usr/bin/env python3
import json
import subprocess
import sys

def get_last_successful_build_date(platform: str) -> str:
    """Returns the completion date of the most recent EAS build, or raises if one is still pending."""
    try:
        # Get the last successful build info from EAS
        result = subprocess.check_output(
            ["eas", "build:list", "--non-interactive", "--json", "--limit", "1", "--platform", platform]
        ).decode().strip()
        builds = json.loads(result)

        if builds and len(builds) > 0:
            status = builds[0].get("status")
            if status == "IN_PROGRESS":
                raise RuntimeError(f"Last {platform} build is still in progress")
            if status == "IN_QUEUE":
                raise RuntimeError(f"Last {platform} build is still in queue")
            if status == "PENDING_CANCEL":
                raise RuntimeError(f"Last {platform} build is pending cancel")
            if status == "NEW":
                raise RuntimeError(f"Last {platform} build is new")

            # The completedAt field contains the build completion timestamp
            return builds[0].get("completedAt")

        return None
    except (subprocess.CalledProcessError, json.JSONDecodeError, KeyError) as e:
        print(f"Error getting last {platform} build date: {e}")
        return None

def has_new_commits_since_last_successful_build(platform: str) -> bool:
    """Returns True if there are new commits since the last successful build."""
    last_build_date = get_last_successful_build_date(platform)
    if not last_build_date:
        # If we can't determine last build date, assume there are changes
        print(f"Could not determine last {platform} build date, assuming changes needed")
        return True

    result = subprocess.check_output(
        ["git", "log", f"--since='{last_build_date}'", "--oneline"],
    ).decode().strip()
    has_changes = bool(result)
    if not has_changes:
        print(f"No new commits since last successful {platform} build ({last_build_date})")
    return has_changes

def build_platform(platform: str) -> None:
    """Execute the appropriate build command based on whether there are changes."""
    base_command = ["eas", "build", "--non-interactive", "--no-wait", "--platform", platform]

    if has_new_commits_since_last_successful_build(platform):
        # Build and submit to store
        command = base_command + ["--auto-submit"]
        print(f"Building and submitting {platform} app to store...")
    else:
        # Build only (keep pipeline warm)
        command = base_command
        print(f"Building {platform} app without submitting (just to keep pipeline warm)...")

    try:
        subprocess.run(command, check=True)
        print(f"Successfully initiated {platform} build")
    except subprocess.CalledProcessError as e:
        print(f"Error during {platform} build: {e}")
        sys.exit(1)

if __name__ == "__main__":
    if len(sys.argv) != 2 or sys.argv[1] not in ["ios", "android"]:
        print("Usage: python eas-build.py <ios|android>")
        sys.exit(1)

    platform = sys.argv[1]
    build_platform(platform)
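
The --auto-submit flag relies on a submit profile in eas.json with the same name as the build profile (here the default, production). The exact contents are project-specific, but a minimal sketch could look like this (the ascAppId and track values are placeholders; the Android serviceAccountKeyPath matches the file decoded in the workflow above):

{
  "build": {
    "production": {}
  },
  "submit": {
    "production": {
      "ios": {
        "ascAppId": "1234567890"
      },
      "android": {
        "serviceAccountKeyPath": "./play_console_service_account_key.json",
        "track": "internal"
      }
    }
  }
}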

About scheduled builds #

We could just trigger the pipeline manually, but builds that run on a schedule are useful for a number of reasons:

Our perception of time is warped to say the least, and having a trusty machine tick away every week is a good way to make sure you remember to ship.

The schedule establishes a habit for you, in that you know every Wednesday at 02:30 AM the pipeline will run and build the app. If you have some bug fixes or new features to get out, your mind will naturally start planning based on the schedule. You have given yourself a weekly, artificial deadline.

And as mentioned before, it means that the pipeline will run even if nobody has pushed any new commits to the repo, catching build errors we might otherwise miss.

Apple review times #

Apple reviews are notoriously arbitrary in many regards, but particularly in how long it takes to review a new build. One way to avoid getting stuck behind a review is to bump the version immediately after a new version is approved (say from 1.0.0 to 1.0.1) and run the iOS pipeline to submit it to TestFlight. That gets you over the first hurdle, the initial review for beta testing, which can sometimes take longer than the actual publishing review (the last step before hitting the App Store). After the first beta review, subsequent builds do not require approval until you decide to publish.
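
In practice that bump is just an edit of the version field in app.json followed by a manual pipeline run, for example (a sketch using jq and the GitHub CLI; the version number is illustrative):

jq '.expo.version = "1.0.1"' app.json > app.json.tmp && mv app.json.tmp app.json
git commit -am "Bump version to 1.0.1"
git push
gh workflow run release.yaml -f platform=ios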

Conclusion #

This automated pipeline setup provides several benefits: releases go out on a predictable weekly schedule without manual effort, build numbers are derived from the commit count so they never collide, and the pipeline keeps running even when nothing has changed, so build breakage surfaces early instead of blocking an urgent fix.

The combination of GitHub Actions and Expo makes for a reliable and maintainable mobile app deployment process that works well for both scheduled and on-demand releases.

Contact me #

Anything missing from this post? What problems do you have with your mobile app deployment pipeline? What do you think is the most annoying part of the process? I'd love to hear from you!

You can contact me at dev.blog@jonatan.blue.