A simple, responsive feature per plan table using CSS Grids

For a long time I was extremely unhappy with the unresponsiveness of the features list in Tideways. The landing page has been using Bootstrap 3 since the beginning, and its table-responsive class is supposed to help here, but it requires visitors to scroll and lose context. See a screenshot of the mobile version of our features table:

https://beberlei.de/_static/cssgrid1.png

For a large table I can’t see which feature and plan a cell belongs to while scrolling around. At the size of the Tideways feature table, this makes the result relatively useless in my opinion.

I came across the Webflow Pricing Page a few days ago and got inspired to redesign the page using CSS Grid and sticky positioning. It took me a while to wrap my head around the CSS concepts, and then I started from scratch to come up with a bare-bones solution.

In this blog post I try to explain the solution in my own words. I am by no means a CSS expert, so take all the explanations with a grain of salt. I am linking as many Mozilla Developer docs as possible for reference.

First, I want to use an ordered list for semantic reasons. I then need to group the feature name and its availability in different plans by splitting each list-item into several cells.

<section class="features">
   <ol>
      <li class="header">
         <div>Features</div>
         <div>Free</div>
         <div>Pro</div>
      </li>
      <li>
         <div>Simple A</div>
         <div>Yes</div>
         <div>Yes</div>
      </li>
      <li>
         <div>Fancy B</div>
         <div>No</div>
         <div>Yes</div>
      </li>
   </ol>
</section>

First we look at the style of the list item:

section.features ol li {
   /* hide the ordered list item numbers */
   list-style-type: none;

   /* set element to grid mode rendering */
   display: grid;

   /* grid has 3 columns with a pre-defined width */
   grid-template-columns: 50% 25% 25%;
}

The magic here is the grid-template-columns property, which can be thought of as similar to defining the number and width of table columns.
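If you prefer flexible tracks over percentages, the same proportions could also be written with fr units; this is an equivalent alternative, not what the Tideways page actually uses:

section.features ol li {
   /* 2fr + 1fr + 1fr distributes the width in the same 50/25/25 ratio */
   grid-template-columns: 2fr 1fr 1fr;
}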

Next we modify the .header class so that it stays stuck to the top of the screen for as long as the features table is visible on screen, using the sticky position.

section.features ol li.header {
   position: sticky;

   /* this must be modified if you have a static top navigation for example */
   top: 0px;

   /* hide feature cells when header is "over" them */
   background-color: #fff;

   /* some styling to make the header stand out a little from the features */
   border-bottom: 1px solid #666;
   font-weight: bold;
}

Lastly we align all text to center in all divs:

section.features ol li div {
    text-align: center;
}

This makes the feature table work nicely in desktop browsers. The white background is necessary so that the feature rows scrolling underneath the header are no longer visible through it.

Now to the responsive part: we use the CSS grid to change the three-column row into two rows, with the feature label spanning the width of both cells that indicate feature availability per plan.

@media(max-width: 672px) {
    section.features ol li {
        /* redefine the grid to have only two columns */
        grid-template-columns: 50% 50%;
        /* and two template rows per list item (this is my murky understanding) */
        grid-template-rows: auto auto;
    }

    section.features ol li div:nth-child(1) {
        /* let the first div (the feature label) span both columns in a row of its own */
        grid-column-start: 1;
        grid-column-end: 3;
        grid-row-start: 1;
        grid-row-end: 2;

        border-bottom: 1px solid #000;
    }
}

The magic is in the grid-column-start (Mozilla Docs) and grid-column-end properties, which sort of act like colspan in tables. In addition, the possibility to change a grid from one row to two rows with just CSS does the rest of the trick here.
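As an aside, the four longhand properties inside the media query could also be written with the grid-column and grid-row shorthands; the behavior is the same, just shorter:

section.features ol li div:nth-child(1) {
    /* span from grid line 1 to 3, i.e. both columns, in the first row */
    grid-column: 1 / 3;
    grid-row: 1;
}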

You can see a full code example of this blog post’s feature table and the re-designed Tideways feature table in action.

https://beberlei.de/_static/cssgrid3.png

Let me know if there are mistakes in my CSS or ways to simplify it even further by contacting me at kontakt@beberlei.de.

More about: CSS

P++ is a bad idea for non-technical reasons

Last week the idea of changing PHP to include two languages, “PHP” (Classic) and “P++”, was proposed on the internals mailing list by Zeev, and in more detail in a FAQ answering common questions. For context, Zeev is one of the original authors of the Zend Engine in PHP 3 and 4 and co-founder of the Zend company, so the proposal got a lot of responses and triggered additional discussions on Reddit and Hacker News.

The goal of the proposal is to find a way to evolve the PHP language using a new dialect (P++) while staying backwards compatible by continuing to support the old dialect (PHP).

Zeev proposes a new open tag <?p++ that would set the PHP compiler into a different mode than <?php does now, providing a “clean start” with BC breaks and more strict typing syntax.

tl;dr: The proposal for P++ is, at its core, a proposal to have the PHP runtime support multiple versions of the PHP language at the same time. Other languages have had this distinction for ages (C with 89, 99, 11, …). By going with a language version number instead of a new language name, we can avoid a lot of the non-technical issues with P++.

I will start with a few non-technical arguments why I think it is a bad idea to introduce a distinct language called “P++” (or any other name) with its own name and brand:

  • From a “governing” perspective, introducing P++ is like a big bang that would force the community onto a path without knowing all the implementation details up front. This goes against the current governance model of PHP, where each larger technical decision is made democratically using the RFC process. At this point we have to respect and accept that without a benevolent dictator you cannot make these big-bang changes in an open source project anymore. Improvements have to be made in incremental steps, and P++ would not fit into this model.
  • From an evolutionary perspective, the premise that the PHP community and internals team can design a new language from the ivory tower and get the details right the first time is a pretence-of-knowledge fallacy. It is much more likely that mistakes are made and in 5 years we are back with the same problem. The P++ proposal sounds like a perpetual experimental version. It would be better to find a long-term strategy to cope with incremental change to the language instead of a big-bang change every 10 years.
  • From a marketing perspective, introducing a new brand “P++” is going to be extremely hard to bring to the market. With “the PHP company” Zend swallowed by larger companies, there is no company with the primary goal of bringing the language forward anymore. PHP is truly a community effort now, without even a foundation. There is no centralized body that could effectively lead the marketing effort for this new P++ brand. We are not in 1979 anymore, when C++ was invented; the language market is highly contested, and we as the PHP community are protected by PHP’s enormous market share, which we should not give up by fragmenting.
  • I recognize “P++” is just a working name right now, and a name without special characters is certainly a better idea. But a name different from PHP introduces even more problems with regard to SEO/Google, and the way the PHP project is organized right now there isn’t even a good process defined that would lead to a great naming outcome.
  • From a documentation perspective, one of PHP’s unique selling points is its awesome docs living on “php.net”. As both dialects PHP and P++ would run on the same engine, it becomes much harder to represent this on the website. Here the argument that the P++ project is feasible even with few internal developers falls apart. It would require a completely overhauled new website, an approach to represent both dialects sufficiently without confusing users, new mailing lists, new everything.
  • From a documentation perspective, assume P++ were to break BC on core APIs compared to PHP. Would php.net/strpos show both the PHP and the P++ function signature, with haystack and needle switched? Or would we need to copy the entire documentation? This would be a huge documentation team effort whose time hasn’t been accounted for by the P++ FAQ/proposal.
  • From a teaching perspective, code examples in the wild on blogs, mailing lists and other sources would often need to make an extra effort to target either PHP, P++ or both. Knowledge would become clustered into two groups.
  • From an ecosystem perspective, a second name/brand would complicate everything for third-party vendors, conferences and magazines. Examples: “PHPStorm, the lightning smart PHP & P++ IDE”, “Xdebug - Debugger and Profiler for PHP and P++”, “Dutch PHP and P++ Conference”, “PHP and P++ Magazine”. We would probably need to introduce another name for the runtime, say PVM, to allow for a precise distinction. This adds even more confusion.
  • From an SEO perspective, Google and other search engines are a primary tool for software developers. If PHP and P++ start fragmenting the community, it becomes much harder for developers to find solutions to problems, because “PHP sort array” will not find the articles about “P++ sort array” that offer the same solution.
  • A long time ago, PHP was described to me as the Borg of programming languages. Assimilating APIs, features, paradigms from everywhere. This is still a very good analogy. And today it supports even more paradigms than 15 years ago and gives users extreme freedom to choose between dynamic or strict typing. This has been done in a way with as few BC breaks as possible. Python 3 and Perl 6 are examples of languages that made it much much harder for users to upgrade. I don’t see why suddenly now this approach is not possible anymore and requires two separate dialects.

The P++ proposal makes a few analogies to arrive at the P++ idea, but they are both flawed in my opinion:

  • The analogy that P++ is to PHP what C++ is to C is wrong. C++ introduced a completely new paradigm (object-oriented programming). P++ as proposed is PHP with some BC breaks; it is more comparable to the Python 2 to 3 transition.
  • The analogy that P++ is to PHP what ES6 is to ES5 is wrong. ES6 and ES5 are versions, like PHP 5 and PHP 7 are. ECMAScript is much better at not breaking backwards compatibility than PHP is, but the language design makes this easier. You can still write JavaScript with just ES5 syntax on every ES6 and ES7 compiler. The same is true of PHP 7, where you can still write code that would also run on PHP 3, PHP 4 and PHP 5.

I have hopefully established enough non-technical reasons why separating PHP into two separate dialects is not a good idea.

An Alternative Approach

But what are the alternatives to evolve the PHP language?

PHP could avoid all the non-technical problems that P++ would introduce by going with an approach like the one C, C++, ECMAScript or Rust use: define different versions of the language that the runtime/compiler can all support. Currently PHP combines runtime and language, and upgrading to the PHP 7 runtime requires you to update your code to PHP 7 semantics.

In C you tell the compiler which version of the standard a file should be compiled against:

gcc -std=c89 file.c
gcc -std=c99 file.c

You can then combine their output into a new binary that includes code compiled with both versions.
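A sketch of that workflow (legacy.c and modern.c are hypothetical file names):

gcc -std=c89 -c legacy.c
gcc -std=c99 -c modern.c
gcc legacy.o modern.o -o app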

Rust has a similar concept named editions. In ECMAScript you use a third-party compiler (like Babel) to compile one version down into another.

Essentially, the proposed semantics of P++ boil down to defining a new version of PHP’s language; they don’t warrant a new language.

If we allow the PHP runtime to support several standards at the same time, we avoid fragmenting the community and sidestep all the non-technical issues listed above.

PHP already uses declare for this kind of per-file decision, so it would be natural to introduce a declare construct that is responsible for switching the compiler between different language versions. Example with a made-up option name and version:

<?php declare(std=20);

This could be defined to automatically include strict_types=1, but also some cleanup of the type juggling rules, for example. The sky is the limit. If we improve the language for the next version, we can introduce the next standard version, and the compiler could still support the old ones for a few years.

PHP users could upgrade to the latest version of the PHP runtime, get security patches, bugfixes, and performance improvements, but can keep the semantics of the version their software was written against. This would simplify the process of keeping backwards compatibility.
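To illustrate with the made-up std option from above: a file opting into the new language version could look like the following, while a legacy file without the declare keeps the old semantics on the very same runtime.

<?php declare(std=20); // hypothetical: could imply strict_types=1 and stricter type juggling

function add(int $a, int $b): int
{
    return $a + $b;
}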

Deciding on the actual naming and syntax would be a minor technical problem.

The JIT in relation to PHP extensions

A few days ago I posted about Playing with the PHP JIT and included some simple benchmarking with the react-php-redis server project, which involves a lot of parsing but is ultimately still bound by I/O even when running async.

I got some questions on Twitter that revolve around misconceptions of what the JIT really can do for PHP applications and what it cannot do.

So to show what the JIT is good for, I wanted a truly CPU-bound problem that is realistic from my point of view.

Inside Tideways we use a datatype called HDR Histogram (high dynamic range histogram), a statistical datatype to calculate exact percentiles in monitoring data. For each minute and server we might have a histogram, and when rendering a chart we merge and aggregate this data in large numbers.

At the moment we use a PHP Extension interfacing with a C library to use this datatype.

I have ported the necessary code to PHP to test this with the JIT, without the JIT and against the PHP extension.

<?php

function simulate_hdr() {
    $hdr = hdr_init(1, 1000, 1);
    for ($i = 1; $i < 1000; $i++) {
        for ($j = 0; $j < 1000; $j++) {
            hdr_record_value($hdr, $i);
        }
    }
    hdr_value_at_percentile($hdr, 95);
}


for ($i = 0; $i < 5; $i++) {
    $time = microtime(true);
    simulate_hdr();

    echo number_format(microtime(true) - $time, 4) . "\n";
}
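The hdr_* functions themselves are not shown here. As a rough illustration only, here is a grossly simplified stand-in (this is neither my actual port nor the real HDRHistogram bucket math; it just keeps exact integer counts so the benchmark script is runnable on its own):

<?php

function hdr_init(int $min, int $max, int $significantFigures): array
{
    // track exact counts per value instead of HDR buckets
    return ['min' => $min, 'max' => $max, 'counts' => [], 'total' => 0];
}

function hdr_record_value(array &$hdr, int $value): void
{
    $hdr['counts'][$value] = ($hdr['counts'][$value] ?? 0) + 1;
    $hdr['total']++;
}

function hdr_value_at_percentile(array $hdr, float $percentile): int
{
    // walk the sorted counts until the target rank is reached
    $target = (int) ceil($hdr['total'] * $percentile / 100);
    ksort($hdr['counts']);
    $running = 0;
    foreach ($hdr['counts'] as $value => $count) {
        $running += $count;
        if ($running >= $target) {
            return $value;
        }
    }
    return $hdr['max'];
}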

Again, take the numbers with a grain of salt; they are just here to show the approximate relationships:

Run      PHP no-JIT   PHP JIT   C/PHP Ext
1        0.5916       0.3671    0.0775
2        0.6322       0.4038    0.0775
3        0.6025       0.3866    0.0799
4        0.6010       0.3892    0.0829
5        0.6137       0.3947    0.0828
Average  0.6082       0.3883    0.0801
%        100.00%      63.84%    13.17%

As you can see, the jitted code runs in roughly 2/3 (63.84%) of the time of the original non-jitted code, which gets into the region of the “twice as fast” that the RFC claims for PHP’s internal benchmark. The improvement is much better than with the react-php-redis server example from a few days ago, where the improvement was only in the 5-20% region.

But compared to implementing this code directly in C as a PHP extension, even the jitted code is still 5 times slower.

Yes, with the JIT there is a massive improvement of this CPU bound problem, but it doesn’t mean we can now re-implement all PHP extensions in pure PHP and rely on the JIT to make them perform.

What the JIT does improve:

  • It makes the parts of CPU-bound problems that are written in PHP (!) faster.

What the JIT does not improve:

  • It does not improve the performance of already fast internal functions written in C, for example hashing or encryption functions.
  • It does not improve performance (by much) for I/O-bound problems.

To close the gap between the JIT and C, we could look at the FFI extension included in PHP 7.4. It allows interfacing with C code more easily from PHP. Anthony Ferrara is building his “php-compiler” project on top of FFI, which would allow compiling a subset of PHP code directly to an FFI-based C extension.
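For a feeling of what FFI usage looks like, here is a minimal sketch (it assumes PHP 7.4 with the ffi extension enabled and a glibc-based Linux system where the C library is available as libc.so.6):

<?php

// declare the C function we want to call and the shared library it lives in
$ffi = FFI::cdef(
    "int printf(const char *format, ...);",
    "libc.so.6"
);

$ffi->printf("Hello from C, %s!\n", "PHP");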

Playing with the PHP JIT

The PHP JIT RFC is a hot topic on the internals list right now, and voting has started for it to be included in PHP 8.0 and as an experimental feature in 7.4.

I wanted to test it out myself, here are the steps necessary to get started on a Linux (Ubuntu) server (or desktop):

git clone https://github.com/php/php-src.git
cd php-src
git remote add zendtech https://github.com/zendtech/php-src.git
git checkout zendtech/jit-dynasm-7.4
./buildconf
./configure  --prefix=/opt/php/php-7.4 --enable-opcache --enable-opcache-jit --with-zlib --enable-zip --enable-json --enable-sockets --without-pear
make -j4
sudo make install
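A quick sanity check that the freshly built binary is the one you expect (the path follows the --prefix used above):

/opt/php/php-7.4/bin/php -v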

For testing I needed a more realistic problem that was a bit more complex than PHP’s internal benchmark (which the JIT doubles in speed).

Luckily I came across a good one at the Symfony User Group in Cologne this week: the react-php-redis server by @another_clue re-implements a Redis server in PHP with almost zero PHP extension dependencies. That means the vanilla build from above, with no extra dependencies, is enough to get it running.

In addition, it fully works with the redis-benchmark command that the original redis-server package includes, so it takes no effort to run some tests. The benchmark pegs the PHP Redis server at 100% CPU, making it a good candidate for testing the JIT.

The code is doing async I/O, so the Redis protocol parsing and internal handling should play a significant role in this code and might be optimizable by the JIT.

git clone https://github.com/clue/php-redis-server.git
cd php-redis-server/
composer install

I ran it without the JIT:

/opt/php/php-7.4/bin/php bin/redis-server.php --port 6380

And with the JIT using these flags:

/opt/php/php-7.4/bin/php -dopcache.enable_cli=1 -dopcache.jit_buffer_size=50000000 -dopcache.jit=1235 bin/redis-server.php --port 6380

The -dopcache.jit=1235 only jits the HOT functions that are called often.

Then to check their performance in relation to each other, I ran redis-benchmark -p 6380 -q against the servers (from the redis-tools package).

Don’t take my numbers as gospel (I ran them on my busy desktop machine), but you can see a 4-23% improvement depending on the benchmarked command.

Benchmark                            No JIT      JIT         % change
PING_INLINE                          30674.85    31877.59     3.92%
PING_BULK                            87873.46    95969.28     9.21%
SET                                  81766.15    87336.24     6.81%
GET                                  81433.22    91575.09    12.45%
INCR                                 77881.62    83682.01     7.45%
LPUSH                                71275.84    79617.83    11.70%
RPUSH                                67294.75    79239.30    17.75%
LPOP                                 73529.41    84530.86    14.96%
RPOP                                 76103.50    80450.52     5.71%
SADD                                 84745.77    89686.10     5.83%
HSET                                 82712.98    91074.68    10.11%
SPOP                                 87260.03    99700.90    14.26%
LPUSH (needed to benchmark LRANGE)   68493.15    83822.30    22.38%
LRANGE_100 (first 100 elements)      21743.86    26759.43    23.07%
LRANGE_300 (first 300 elements)       9825.11    11923.21    21.35%
LRANGE_500 (first 450 elements)       6819.42     8272.67    21.31%
LRANGE_600 (first 600 elements)       5120.33     5707.11    11.46%
MSET (10 keys)                       45998.16    52631.58    14.42%

Looking at the results in a system profiler (perf), I can see that the PHP process spends a lot of time in I/O functions, so these numbers are not showing the full potential of the JIT with CPU-bound code.

Integrate Ansible Vault with 1Password Commandline

We are using Ansible to provision and deploy Tideways in development and production, and the Ansible Vault feature to unlock secrets in production. Since we recently introduced 1Password, I integrated the two and now unlock the Ansible Vault using 1Password.

This way we can centrally change the Ansible Vault password regularly, without any of the developers with access to production/deployment needing to know the actual password.

To make this integration work, you can set up the 1Password CLI to query your 1Password vault for secrets after logging in with your password and two-factor token.

Then you only need a bash script to act as an executable Ansible Vault password file.

First, download and install the 1Password CLI according to their documentation.

Next, you need to log in to your 1Password account, explicitly passing email, domain and secret key, so that the CLI can store this information in a configuration file.

$ op signin example.1password.com me@example.com
Enter the Secret Key for me@example.com at example.1password.com: A3-**********************************
Enter the password for me@example.com at example.1password.com:
Enter your six-digit authentication code: ******

After this one-time step, you can log in more easily by just specifying op signin example, so I created aliases for this in ~/.bash_aliases (I am on Ubuntu).

alias op-signin='eval $(op signin example)'
alias op-logout='op signout && unset OP_SESSION_example'

The eval line makes sure that an environment variable OP_SESSION_example is set for this terminal/shell only, granting temporary access to your 1Password vault in subsequent calls to the op command. You can use the op-logout alias to invalidate this session and log out.

Then I created the bash script /usr/local/bin/op-vault that is used as the Ansible Vault password file. It needs to fetch the secret and print it to stdout.

#!/bin/bash
VAULT_ID="1234"
VAULT_ANSIBLE_NAME="Ansible Vault"
op get item --vault=$VAULT_ID "$VAULT_ANSIBLE_NAME" |jq '.details.fields[] | select(.designation=="password").value' | tr -d '"'

This one-liner uses jq to slice the JSON output and print only the password. The tr command trims the double quotes around the password.
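As a possible small simplification (an assumption on my side, not what we currently run): jq's -r flag prints raw strings without the surrounding quotes, which would make the tr step unnecessary.

op get item --vault=$VAULT_ID "$VAULT_ANSIBLE_NAME" | jq -r '.details.fields[] | select(.designation=="password").value'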

Make sure to configure the VAULT_ID and VAULT_ANSIBLE_NAME variables to point to the ID of the vault the secret is stored in, and to its name in the item list. To get the UUIDs of all your vaults, run op list vaults in your CLI.

Afterwards you can unlock your Ansible Vault with 1Password by calling:

ansible-playbook --vault-password-file=/usr/local/bin/op-vault -i inventory your_playbook.yml

This only works in the current terminal/shell, and only after you have called op-signin to enter your password and two-factor token.
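If you don't want to pass the flag on every invocation, the vault password file can also be configured in the project's ansible.cfg (assuming a project-local configuration file):

[defaults]
vault_password_file = /usr/local/bin/op-vault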

More about: Deployment / DevOps / Ansible / Automation