I have been through a tough period with MySQL stored procedures and Entity Framework. To tell the truth, working with Entity Framework in Visual Studio is really painful, especially for teamwork. Anyway, back to stored procedures: unlike in other high-level programming languages, some MySQL functions do not behave the way you might expect.

GROUP_CONCAT() has a default max length

GROUP_CONCAT() returns a concatenated string from a group of items, for example,

SELECT student_name,
GROUP_CONCAT(test_score)
FROM student
GROUP BY student_name;

The result is truncated to the maximum length set by the variable group_concat_max_len, which has a default value of 1024.

Of course, you can reset the value, although the effective maximum length of the return value is constrained by the value of max_allowed_packet. You can change the value at runtime as follows,

SET [GLOBAL | SESSION] group_concat_max_len = 65535;
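To see the truncation in action, here is a small self-contained sketch (no tables needed; the numbers assume the default limit of 1024):

```sql
-- Four 500-character strings plus three separators make 2003 characters,
-- but with the default limit the result is cut to 1024.
SELECT LENGTH(GROUP_CONCAT(s)) AS len
FROM (SELECT REPEAT('x', 500) AS s
      UNION ALL SELECT REPEAT('x', 500)
      UNION ALL SELECT REPEAT('x', 500)
      UNION ALL SELECT REPEAT('x', 500)) AS t;
-- -> 1024

SET SESSION group_concat_max_len = 65535;
-- Re-running the SELECT above now returns the full length of 2003.
```

Note that SESSION only affects the current connection; GLOBAL changes the default for new connections.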

CONCAT() returns NULL if any argument is NULL

I was quite surprised when I learned this; in most other programming languages it would not make much sense to return NULL just because one argument is NULL.

SELECT CONCAT('My', 'S', 'QL');
-- -> 'MySQL'
SELECT CONCAT('My', NULL, 'QL');
-- -> NULL
SELECT CONCAT(14.3);
-- -> '14.3'

A quick workaround is to wrap the nullable argument with COALESCE(), which returns its first non-NULL argument.

SELECT COALESCE(NULL,1);
-- -> 1
SELECT COALESCE(NULL,NULL,NULL);
-- -> NULL
SELECT CONCAT('My', 'S', COALESCE(NULL, 'QL'));
-- -> 'MySQL'

Actually, there is another function, CONCAT_WS(), that handles NULL arguments better. CONCAT_WS() stands for Concatenate With Separator and is a special form of CONCAT(). The first argument is the separator for the rest of the arguments, and it is added between the strings to be concatenated. The separator can be a string, as can the rest of the arguments. If the separator is NULL, the result is NULL; NULL values among the remaining arguments, however, are simply skipped.

SELECT CONCAT_WS(',','First name','Second name','Last Name');
-- -> 'First name,Second name,Last Name'
SELECT CONCAT_WS(',','First name',NULL,'Last Name');
-- -> 'First name,Last Name'

The 65,535-byte row size limit

According to the MySQL 5.7 Reference Manual,

The internal representation of a MySQL table has a maximum row size limit of 65,535 bytes, even if the storage engine is capable of supporting larger rows

So no matter how many VARCHAR columns a row contains, their combined size must not exceed the row size limit.

Values in VARCHAR columns are variable-length strings. The length can be specified as a value from 0 to 255 before MySQL 5.0.3, and 0 to 65,535 in 5.0.3 and later versions. The effective maximum length of a VARCHAR in MySQL 5.0.3 and later is subject to the maximum row size (65,535 bytes, which is shared among all columns) and the character set used.

Use BLOB or TEXT if you have a large amount of text: these columns contribute only 9 to 12 bytes toward the row size, because their data is stored internally in a different area than the row buffer.
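As a sketch of the difference (the table and column names here are made up; with the utf8 character set MySQL reserves up to 3 bytes per character, so each VARCHAR(20000) needs roughly 60,000 bytes):

```sql
-- Fails with "Row size too large" (error 1118): the three columns
-- together need far more than the 65,535-byte row size limit.
CREATE TABLE articles (
    title   VARCHAR(20000),
    summary VARCHAR(20000),
    body    VARCHAR(20000)
) CHARACTER SET utf8;

-- Works: each TEXT column counts only 9 to 12 bytes toward the row size,
-- because the actual data is stored outside the row.
CREATE TABLE articles (
    title   TEXT,
    summary TEXT,
    body    TEXT
) CHARACTER SET utf8;
```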

Use SELECT … FOR UPDATE to lock the row

If you query data and then insert or update data within the same transaction, a regular SELECT does not give enough protection: other transactions can update or delete the very rows you just queried. InnoDB therefore supports two types of locking reads that offer extra safety, SELECT ... LOCK IN SHARE MODE and SELECT ... FOR UPDATE. Here is an example of the latter.

SELECT counter_field FROM child_codes FOR UPDATE;
UPDATE child_codes SET counter_field = counter_field + 1;

This is a good pattern for implementing an incrementing counter, especially when you have multiple readers and writers. SELECT ... FOR UPDATE sets an exclusive lock on the row it reads, so other transactions that attempt a locking read or an update of the same row are blocked until the update is done and the transaction commits.
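Putting it together, the counter update would typically run inside an explicit transaction; without one, each statement auto-commits and the lock is released immediately. A sketch using the table from the example above:

```sql
START TRANSACTION;
-- Acquires an exclusive lock on the row; other FOR UPDATE
-- readers of the same row wait here.
SELECT counter_field FROM child_codes FOR UPDATE;
UPDATE child_codes SET counter_field = counter_field + 1;
COMMIT;  -- releases the lock so waiting transactions can proceed
```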