olm::decode_base64 now returns the length of the raw decoded data on
success. When given input with an invalid base64 length, it fails early
(before decoding any input) and returns -1.
This also makes the C function _olm_decode_base64 an actual binding of
olm::decode_base64 instead of a wrapper with slightly different
behaviour.
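As a rough sketch of the new contract (the helper name, return type and
length arithmetic below are assumptions for illustration, not libolm's
actual internal prototype):

#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch only: unpadded base64 never has a length of
 * 1 (mod 4), so such input is rejected before anything is decoded. */
ptrdiff_t decode_base64_sketch(const uint8_t *input, size_t input_length,
                               uint8_t *output) {
    if (input_length % 4 == 1) {
        return -1;  /* invalid base64 length: fail early, decode nothing */
    }
    size_t raw_length = (input_length / 4) * 3 + ((input_length % 4) * 3) / 4;
    /* ... decode input_length base64 characters into output here ... */
    (void)input; (void)output;
    return (ptrdiff_t)raw_length;  /* length of the raw decoded data */
}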
_olm_decode_group_message should initialize all fields of the results
struct before returning. This is because its caller
_decrypt_max_plaintext_length relies on it having initialized these
fields.
Luckily, the missing initialization only allows one to subvert the
version check in _decrypt_max_plaintext_length, but not the subsequent
check that the ciphertext field is non-null, because that field *is*
initialized.
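As an illustration of the pattern (the struct layout and field names
here are hypothetical, not libolm's actual definition), initialising
every field up front means the caller never observes stale stack
contents:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical results struct, for illustration only. */
struct decode_results {
    uint8_t version;
    uint32_t message_index;
    int has_message_index;
    const uint8_t *ciphertext;
    size_t ciphertext_length;
};

static void decode_group_message_sketch(
    const uint8_t *input, size_t input_length,
    struct decode_results *results
) {
    /* Initialise every field before any early return, so a caller that
     * inspects e.g. results->version or results->ciphertext never reads
     * uninitialised memory. */
    results->version = 0;
    results->message_index = 0;
    results->has_message_index = 0;
    results->ciphertext = NULL;
    results->ciphertext_length = 0;

    if (input_length == 0) {
        return;  /* early return is safe: all fields have known values */
    }
    results->version = input[0];
    /* ... parse the remaining input and fill in the other fields ... */
}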
Since it's important to keep backwards compatibility, introduce a new
function to calculate the MAC using a SAS object. Modifying the
existing functions would break compatibility with older releases of
libolm.
When calculating the MAC for a message using olm_sas_calculate_mac() or
olm_sas_calculate_mac_long_kdf(), the resulting MAC is base64-encoded
using _olm_encode_base64().
The _olm_encode_base64() function takes an input buffer, an output
buffer, and the input length. It is called with the same buffer,
containing the raw MAC, as both the input and the output buffer. This
results in an incorrectly base64-encoded MAC.
For example, the byte array:
[121, 105, 187, 19, 37, 94, 119, 248, 224, 34, 94, 29, 157, 5,
15, 230, 246, 115, 236, 217, 80, 78, 56, 200, 80, 200, 82, 158,
168, 179, 10, 230]
will be incorrectly encoded as eWm7NyVeVmXgbVhnYlZobllsWm9ibGxz
instead of as eWm7EyVed/jgIl4dnQUP5vZz7NlQTjjIUMhSnqizCuY
Notice that the two values diverge at the fifth character.
The correct result can be independently checked using Python for
example:
>>> from base64 import b64encode
>>> mac = [121, 105, 187, 19, 37, 94, 119, 248, 224, 34, 94, 29, 157,
...        5, 15, 230, 246, 115, 236, 217, 80, 78, 56, 200, 80, 200,
...        82, 158, 168, 179, 10, 230]
>>> b64encode(bytearray(mac)).rstrip(b"=")
b'eWm7EyVed/jgIl4dnQUP5vZz7NlQTjjIUMhSnqizCuY'
This happens because the encode_base64() function that is used does not
support in-place encoding in the general case: a 32-byte input always
leaves a remainder of 2 after the main encoding loop.
The remainder is handled here:
c01164f001/src/base64.cpp (L74)
The logic that runs when a remainder exists reads the original input
bytes. Since those bytes have already been overwritten by the in-place
encoding, the remainder handling behaves differently when the input
buffer is the same as the output buffer.
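To make the failure mode concrete, here is a small standalone program
with a generic forward base64 encoder (a toy, not libolm's
implementation) that encodes the 32-byte MAC from the example above
once in place and once into a separate buffer; only the second call
yields the correct value, because the in-place call keeps reading bytes
it has already overwritten with encoded output:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static const char ALPHABET[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/* Toy unpadded base64 encoder with the usual 3-byte/4-character layout. */
static size_t toy_encode_base64(const uint8_t *input, size_t input_length,
                                uint8_t *output) {
    size_t i = 0, o = 0;
    while (i + 3 <= input_length) {
        uint32_t v = (uint32_t)input[i] << 16 | (uint32_t)input[i + 1] << 8
                   | input[i + 2];
        output[o++] = ALPHABET[(v >> 18) & 0x3F];
        output[o++] = ALPHABET[(v >> 12) & 0x3F];
        output[o++] = ALPHABET[(v >> 6) & 0x3F];
        output[o++] = ALPHABET[v & 0x3F];
        i += 3;
    }
    /* The remainder handling re-reads input[i] and input[i + 1]; when
     * output aliases input these bytes may already hold encoded output. */
    if (input_length - i == 2) {
        uint32_t v = (uint32_t)input[i] << 8 | input[i + 1];
        output[o++] = ALPHABET[(v >> 10) & 0x3F];
        output[o++] = ALPHABET[(v >> 4) & 0x3F];
        output[o++] = ALPHABET[(v << 2) & 0x3F];
    } else if (input_length - i == 1) {
        uint32_t v = input[i];
        output[o++] = ALPHABET[(v >> 2) & 0x3F];
        output[o++] = ALPHABET[(v << 4) & 0x3F];
    }
    return o;
}

int main(void) {
    const uint8_t mac[32] = {
        121, 105, 187, 19, 37, 94, 119, 248, 224, 34, 94, 29, 157, 5,
        15, 230, 246, 115, 236, 217, 80, 78, 56, 200, 80, 200, 82, 158,
        168, 179, 10, 230
    };
    uint8_t in_place[64];
    uint8_t separate[64];

    /* Input and output are the same buffer, as in the buggy call. */
    memcpy(in_place, mac, sizeof(mac));
    size_t n1 = toy_encode_base64(in_place, sizeof(mac), in_place);
    printf("in-place: %.*s\n", (int)n1, (const char *)in_place);

    /* Separate output buffer: the correct encoding. */
    size_t n2 = toy_encode_base64(mac, sizeof(mac), separate);
    printf("separate: %.*s\n", (int)n2, (const char *)separate);
    return 0;
}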
Change the interface to allow the app to get the private part of the
key and to instantiate a decryption object from just the private part
of the key.
This changes the function that generates a key from random bytes into
one that initialises a key with a given private key (because it is
exactly the same operation). Private key parts are exported and
imported as ArrayBuffer at the JS level rather than as base64, on the
assumption that we are moving that way in general.
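At the C level, the shape of such an interface might look roughly like
the following (the names and signatures are illustrative placeholders
only, not necessarily the actual olm_pk_* API):

#include <stddef.h>

/* Opaque decryption object; stands in for the real type. */
typedef struct pk_decryption pk_decryption;

/* Number of bytes in the raw private key part. */
size_t pk_private_key_length(void);

/* Initialise `d` from a private key (either freshly generated random
 * bytes or a previously exported key: it is the same operation) and
 * write the public part to `pubkey`. */
size_t pk_key_from_private(pk_decryption *d,
                           void *pubkey, size_t pubkey_length,
                           const void *privkey, size_t privkey_length);

/* Export the raw private key so the app can store it and later rebuild
 * the decryption object from it alone. */
size_t pk_get_private_key(pk_decryption *d,
                          void *privkey, size_t privkey_length);

At the JS level the private key part would then cross the boundary as
an ArrayBuffer of those raw bytes rather than as a base64 string.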
Make olm_pickle_* return the lengths of the base64-encoded pickles,
rather than the lengths of the raw pickles. (From the application's
point of view, the format of the pickle is opaque: it doesn't even know
that it is base64-encoded, so returning the length of the raw pickle is
particularly unhelpful.)
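For illustration, this is the calling pattern the change lines up with
(assuming the standard OlmAccount pickling API; error handling is
elided):

#include <stdlib.h>
#include <olm/olm.h>

/* The app sizes its buffer with olm_pickle_account_length(), which is
 * the length of the (opaque, base64-encoded) pickle. With this change
 * the value returned by olm_pickle_account() matches that length,
 * rather than the length of a raw pickle the app never sees. */
static size_t pickle_account(OlmAccount *account,
                             const void *key, size_t key_length,
                             char **out) {
    size_t pickle_length = olm_pickle_account_length(account);
    *out = malloc(pickle_length);  /* NULL check elided for brevity */
    return olm_pickle_account(account, key, key_length, *out, pickle_length);
}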
All the other methods clear their random inputs. This one needs to do the same,
to reduce the risk of the randomness being used elsewhere and leaking key info.
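A generic sketch of the idea (this is the usual buffer-wiping pattern,
not libolm's internal helper):

#include <stddef.h>
#include <stdint.h>

/* Overwrite the random bytes once they have been consumed, so the same
 * randomness cannot be reused and cannot later be recovered from memory.
 * The volatile pointer discourages the compiler from eliding the writes. */
static void wipe(void *buffer, size_t length) {
    volatile uint8_t *p = (volatile uint8_t *)buffer;
    while (length--) {
        *p++ = 0;
    }
}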
Fixes a segfault when a group message has exactly the length of the MAC
plus the signature.
Also tweaks the skipping of unknown tags to avoid an extra trip around
the loop.
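The length check involved looks roughly like this (a schematic with
made-up constant and function names, not the actual decode code):

#include <stddef.h>

/* Hypothetical sizes, for illustration only. */
enum { MAC_LENGTH = 8, SIGNATURE_LENGTH = 64 };

/* A message of exactly MAC_LENGTH + SIGNATURE_LENGTH bytes has an empty
 * body; it must be rejected up front instead of being parsed further. */
static int group_message_body_length(size_t message_length,
                                     size_t *body_length) {
    if (message_length <= MAC_LENGTH + SIGNATURE_LENGTH) {
        return -1;  /* too short: nothing left to parse */
    }
    *body_length = message_length - MAC_LENGTH - SIGNATURE_LENGTH;
    return 0;
}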