Abiodun Solomon I’m a software developer that is curious about modern technologies. I love contributing to the growth of knowledge for the betterment of humanity.

A quick guide to optimizing Laravel apps with Octane



The Laravel team announced the release of Laravel Octane in 2021. Its purpose is to improve the speed and performance of Laravel applications by reducing request/response time: instead of booting the framework on every request, Octane boots the application once and caches an instance of the Laravel dependency container in memory, serving subsequent requests from that cached instance. This is made possible by tools called Swoole and RoadRunner.

In this post, we’ll walk through a quick-start guide on how to optimize your Laravel apps with Octane, including a benchmark analysis that compares the performance of RoadRunner, Swoole, and Nginx.

Jump ahead:

  • Explanation of tools
  • Why Laravel Octane?
  • Setting up a Laravel Octane app
  • Installing application servers
  • Benchmarking app servers with AutoCannon
  • Conclusion

Explanation of tools

  • Swoole: Swoole is a PHP extension that brings low-level, asynchronous features such as event loops and async I/O to PHP’s traditionally stateless, synchronous model, improving its performance. Swoole tends to be popular because it’s a PHP extension, compared with RoadRunner, which is built with Go
  • RoadRunner: RoadRunner is a high-performance PHP application server, load-balancer, and process manager written in Go — it’s a binary app that needs to be installed before use
  • AutoCannon: AutoCannon is an HTTP benchmarking tool written in Node.js. It’s used for evaluating the performance of web applications

Why Laravel Octane?

Laravel Octane is a package that serves Laravel apps with Swoole or RoadRunner to help improve performance.

Conventional Laravel apps are served by web servers like Apache, Nginx, and Lighttpd, which spawn a PHP-FPM worker for every request. This creates overhead, because a process has to be started and the Laravel app booted on every single request. It is referred to as a stateless approach, since no PHP process state is re-used between requests.

Swoole and RoadRunner also use worker processes for requests, but each worker only boots the framework (and its dependency container) once, on the first request it serves; every subsequent request is handled by that already-bootstrapped copy of the framework.
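
To make the difference concrete, below is a minimal sketch of a hypothetical route (not part of the demo project) that exposes this stateful worker model. Under PHP-FPM the counter would reset on every request, while under Octane each worker keeps its own counter between requests because the process is re-used:

// routes/web.php (hypothetical route, for illustration only)
use Illuminate\Support\Facades\Route;

Route::get('/counter', function () {
    // A static variable survives between requests inside an Octane worker,
    // because the worker process (and the booted framework) is re-used
    static $hits = 0;

    return ['hits' => ++$hits];
});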

Pros of Octane

  • It supercharges the performance of your Laravel apps
  • It conserves resources compared to conventional Laravel apps

Challenges of Octane

  • Code changes can be a challenge: since Octane caches your app in memory, a change in your code may not show up after a browser refresh unless Octane is restarted or running in watch mode (see the command after this list)
  • Since the app stays in memory, memory leaks are another challenge to watch out for, because anything kept in memory between requests, particularly static and global variables, keeps accumulating
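
For the first challenge, Octane provides a --watch flag that reloads workers whenever your files change; it relies on the Chokidar file-watching library from npm:

npm install --save-dev chokidar

php artisan octane:start --watch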

Setting up a Laravel Octane app

Now that we’ve gone over the details of what comprises Octane and what it does, let’s get started with putting it into action.

Here’s how to begin:

❯ composer create-project laravel/laravel laravel-octane


❯ composer require laravel/octane


❯ php artisan octane:install


 Which application server you would like to use?:
  [0] roadrunner
  [1] swoole
 > 0

Installing application servers

RoadRunner or Swoole is needed to serve your application, and both are external dependencies. In most cases, the RoadRunner package is installed automatically after you select it as the application server; if it isn’t, install it manually with the command below:

composer require spiral/roadrunner

Installing Swoole is a bit different, as it’s a PHP extension rather than a Composer package, so it requires a few extra steps. The following command installs it so you can get started with the setup:

pecl install swoole

(Note: We won’t cover the procedures for installing and setting up Swoole with PHP, but here’s a simple guide to do it)
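
After pecl finishes, the extension still needs to be enabled in your php.ini. As a rough sketch (the php.ini path below is only an example; yours depends on your PHP installation):

# Enable the extension (adjust the php.ini path for your installation)
echo "extension=swoole" >> /etc/php/8.2/cli/php.ini

# Verify that PHP can see the extension
php -m | grep swoole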

Start your app with the following command, then open the address it prints in your browser.

 php artisan octane:start

   INFO  Server running…

  Local: http://127.0.0.1:8000

You can also specify which application server to use and how many worker processes to run, based on the number of CPU threads in your environment, as shown below.

php artisan octane:start --workers=4 --server=roadrunner
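
If you’d rather not pass --server every time, the default server can also be set through the OCTANE_SERVER environment variable, which the published config/octane.php reads. A minimal sketch:

# .env
OCTANE_SERVER=swoole

php artisan octane:start --workers=4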

Benchmarking app servers with AutoCannon

The project used in this post is a simple page populated with a small amount of data. We will switch the app server (RoadRunner, Swoole, and Nginx) at each stage of the test to evaluate and compare the performance of each server using AutoCannon.

This process will help you in making a decision on which app server is most suitable for your project.

AutoCannon is capable of generating a lot of traffic, even when running on a single multi-core machine. We will run each benchmark for 10 seconds with 100 concurrent connections, a pipelining factor of 10, and 3 worker threads firing the requests.
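
If you don’t have AutoCannon installed yet, it ships as an npm package and is typically installed globally:

npm install -g autocannon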

Swoole

❯ autocannon http://127.0.0.1:8000 -d 10 -w 3 -c 100 -p 10
Running 10s test @ http://127.0.0.1:8000
100 connections with 10 pipelining factor
3 workers

┌─────────┬────────┬─────────┬─────────┬─────────┬────────────┬───────────┬─────────┐
│ Stat    │ 2.5%   │ 50%     │ 97.5%   │ 99%     │ Avg        │ Stdev     │ Max     │
├─────────┼────────┼─────────┼─────────┼─────────┼────────────┼───────────┼─────────┤
│ Latency │ 201 ms │ 1773 ms │ 3175 ms │ 3304 ms │ 1854.07 ms │ 657.15 ms │ 4201 ms │
└─────────┴────────┴─────────┴─────────┴─────────┴────────────┴───────────┴─────────┘
┌───────────┬─────┬──────┬─────────┬─────────┬─────────┬─────────┬─────────┐
│ Stat      │ 1%  │ 2.5% │ 50%     │ 97.5%   │ Avg     │ Stdev   │ Min     │
├───────────┼─────┼──────┼─────────┼─────────┼─────────┼─────────┼─────────┤
│ Req/Sec   │ 0   │ 0    │ 503     │ 576     │ 475.3   │ 166.71  │ 440     │
├───────────┼─────┼──────┼─────────┼─────────┼─────────┼─────────┼─────────┤
│ Bytes/Sec │ 0 B │ 0 B  │ 4.13 MB │ 4.73 MB │ 3.91 MB │ 1.37 MB │ 3.62 MB │
└───────────┴─────┴──────┴─────────┴─────────┴─────────┴─────────┴─────────┘

Req/Bytes counts sampled once per second.
# of samples: 30

6k requests in 10.02s, 39.1 MB read

RoadRunner

❯ autocannon http://127.0.0.1:8000 -d 10 -w 3 -c 100 -p 10
Running 10s test @ http://127.0.0.1:8000
100 connections with 10 pipelining factor
3 workers

┌─────────┬────────┬─────────┬─────────┬─────────┬────────────┬───────────┬─────────┐
│ Stat    │ 2.5%   │ 50%     │ 97.5%   │ 99%     │ Avg        │ Stdev     │ Max     │
├─────────┼────────┼─────────┼─────────┼─────────┼────────────┼───────────┼─────────┤
│ Latency │ 119 ms │ 1692 ms │ 2314 ms │ 2587 ms │ 1617.82 ms │ 574.62 ms │ 3153 ms │
└─────────┴────────┴─────────┴─────────┴─────────┴────────────┴───────────┴─────────┘
┌───────────┬─────────┬─────────┬─────────┬─────────┬─────────┬─────────┬─────────┐
│ Stat      │ 1%      │ 2.5%    │ 50%     │ 97.5%   │ Avg     │ Stdev   │ Min     │
├───────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┤
│ Req/Sec   │ 366     │ 366     │ 544     │ 861     │ 546.3   │ 124.68  │ 366     │
├───────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┤
│ Bytes/Sec │ 3.01 MB │ 3.01 MB │ 4.47 MB │ 7.08 MB │ 4.49 MB │ 1.02 MB │ 3.01 MB │
└───────────┴─────────┴─────────┴─────────┴─────────┴─────────┴─────────┴─────────┘

Req/Bytes counts sampled once per second.
# of samples: 30

6k requests in 10.02s, 44.9 MB read

To benchmark the app via Nginx, we need to set up Laravel Valet and then run the same command; in this case we hit http://127.0.0.1 without specifying a port, because Valet’s Nginx serves the app on port 80.
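
For reference, a minimal Valet setup looks roughly like this (macOS only, assuming Composer’s global bin directory is on your PATH):

composer global require laravel/valet
valet install

# From the project directory, register the site with Valet's Nginx
cd laravel-octane
valet link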

❯ autocannon http://127.0.0.1 -d 10 -w 3 -c 100 -p 10
Running 10s test @ http://127.0.0.1
100 connections with 10 pipelining factor
3 workers

┌─────────┬────────┬────────┬────────┬────────┬───────────┬─────────┬────────┐
│ Stat    │ 2.5%   │ 50%    │ 97.5%  │ 99%    │ Avg       │ Stdev   │ Max    │
├─────────┼────────┼────────┼────────┼────────┼───────────┼─────────┼────────┤
│ Latency │ 111 ms │ 169 ms │ 202 ms │ 235 ms │ 166.22 ms │ 23.1 ms │ 290 ms │
└─────────┴────────┴────────┴────────┴────────┴───────────┴─────────┴────────┘
┌───────────┬─────────┬─────────┬─────────┬─────────┬─────────┬────────┬─────────┐
│ Stat      │ 1%      │ 2.5%    │ 50%     │ 97.5%   │ Avg     │ Stdev  │ Min     │
├───────────┼─────────┼─────────┼─────────┼─────────┼─────────┼────────┼─────────┤
│ Req/Sec   │ 4551    │ 4551    │ 5691    │ 6343    │ 5718.8  │ 464.3  │ 4548    │
├───────────┼─────────┼─────────┼─────────┼─────────┼─────────┼────────┼─────────┤
│ Bytes/Sec │ 2.13 MB │ 2.13 MB │ 2.67 MB │ 2.98 MB │ 2.68 MB │ 218 kB │ 2.13 MB │
└───────────┴─────────┴─────────┴─────────┴─────────┴─────────┴────────┴─────────┘

Req/Bytes counts sampled once per second.
# of samples: 32

0 2xx responses, 62950 non 2xx responses
64k requests in 10.01s, 29.5 MB read

Conclusion

According to the benchmark analysis, Nginx handled a total of 64k requests, far more than Swoole and RoadRunner, which came to roughly 12k combined, or about 6k each. Note, however, that in this run every one of the Nginx responses was a non-2xx response, which is worth keeping in mind when comparing the raw throughput numbers.



In conclusion, I would consider Swoole and RoadRunner the better option in this scenario. That doesn’t necessarily mean Nginx or other web servers such as Apache and Lighttpd should not be considered: they still serve millions of websites with large numbers of concurrent users and remain great choices.

Thanks for reading! Let me know your own opinion about Laravel Octane (or other web servers) and which you like using in your own projects.
