.. _numeric:

Working with numeric data
#########################

Crandas offers a wide range of tools for working with numeric data. These tools allow you to perform various operations on columns, such as arithmetic operations and aggregations, to analyze and process your data.

In this guide, we will give you an introduction to these operations. First, we will familiarize you with the particularities of numerical types in the VDL.

.. warning::

    Currently, arithmetic functionality is only available for integers and not for fixed-point numbers. Always make sure that data is of the right format when working with it in the VDL.

Integer data types and conversion
=================================

Crandas supports signed and unsigned integer numbers with several bit lengths: 8, 16, 24, and 32. For example, a 16-bit unsigned integer is referred to as a ``uint16``; a 24-bit signed integer is referred to as an ``int24``.

It is possible to specify the type of integer as a ctype (see :ref:`ctypes_section`), e.g.:

.. code:: python

    import crandas as cd

    cdf = cd.DataFrame({"vals": [1,2,3]}, ctype={"vals": "int16"})

Crandas does not allow computations to overflow. For example, the following does not work:

.. code:: python

    cdf = cd.DataFrame({"vals": [1,2,3]}, ctype={"vals": "int16"})
    cdf = cdf.assign(cube=lambda x: x.vals*x.vals*x.vals)  # this overflows

In this case, because the column ``vals`` is 16-bit, its cube will be 48-bit, so the above computation fails with the error message ``Integer operation overflow: value does not fit in 32 bits``.

.. note::

    In many cases, the cube of a 16-bit integer can easily fit in 16 bits; clearly 3^3 = 27 can be expressed with 16 bits (in fact, it can be expressed with only five). However, as the values in the VDL are `secret-shared`, it is very costly to preemptively check whether the result of an operation would fit the current type. Therefore, the VDL assumes that it cannot fit, in order to prevent overflow errors.

In this example, it is possible to convert the column ``vals`` to another integer type so that the computation of the cube can succeed, e.g.:

.. code:: python

    cdf = cd.DataFrame({"vals": [1,2,3]}, ctype={"vals": "int16"})
    cdf = cdf.assign(vals=lambda x: x.vals.astype("int8"))
    cdf = cdf.assign(cube=lambda x: x.vals*x.vals*x.vals)  # this works

Note that the above conversion does not actually check that the column values lie in the specified range. If they do not, the result of computing on the column is undefined. To check that the conversion succeeds, the following can be used:

.. code:: python

    cdf = cd.DataFrame({"vals": [1,2,3]}, ctype={"vals": "int16"})
    cdf = cdf.assign(vals=lambda x: x.vals.astype("int8", validate=True))
    cdf = cdf.assign(cube=lambda x: x.vals*x.vals*x.vals)  # this works

Automatic type detection
------------------------

If no ctype is specified for an integer column, the smallest integer type that is valid for the column is selected, e.g.:

.. code:: python

    cdf = cd.DataFrame({"vals": [1,2,3]})

In this example, the column ``vals`` will be derived to be of type ``uint8`` (unsigned 8-bit integer) because all values lie in the range from 0 to 255 (inclusive). Generally, the server is able to execute operations faster for columns that have a smaller range, e.g., filtering on the values of a ``uint8`` column is faster than filtering on the values of a ``uint24`` column.

Because the column type is derived from the data, this potentially leaks information.
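If you want to prevent the type (and thereby a bound on the data) from being derived from the column values, you can declare the ctype explicitly, using the same syntax as the ``int16`` example above. A minimal sketch; the choice of ``int32`` here is only illustrative:

.. code:: python

    # Declare the column type explicitly so that it is not derived
    # from the (secret) data values
    cdf = cd.DataFrame({"vals": [1,2,3]}, ctype={"vals": "int32"})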
When the type is derived automatically, crandas warns about this by giving a ``ColumnBoundDerivedWarning``, e.g.: ``ColumnBoundDerivedWarning: Column "vals" was automatically derived to be of type uint8``. For more information, see: :ref:`type_detection`. These warnings can be suppressed as follows:

.. code:: python

    import warnings

    # Suppress warnings about automatically derived column types
    warnings.simplefilter("ignore", category=cd.ctypes.ColumnBoundDerivedWarning)

    cdf = cd.DataFrame({"vals": [1,2,3]})

Fixed-point columns
-------------------

.. versionadded:: 1.7
    Fixed-point columns

The VDL also supports fixed-point columns. Fixed-point is a way to represent non-integer numbers using a *fixed* number of digits after the decimal point. This representation is less flexible than *floating-point* numbers, which is the usual way such numbers are represented, but it is more efficient for calculations in MPC.

.. note::

    Currently, fixed-point columns can only be used in linear and logistic regression. Arithmetic on fixed-point columns will be added in a later release.

When uploading a fixed-point column, we not only need to name the bounds explicitly, but we might also want to specify the precision. The ``precision`` parameter determines the number of digits after the decimal point. The default precision is ``20`` bits, which roughly corresponds to *six* decimal places. To specify that we are uploading a fixed-point column, we use the ctype name ``fp``.

Boolean columns
---------------

As a special case, the ctype ``bool`` will be used for columns that contain the boolean values ``True`` and ``False``, and for columns for which this ctype is explicitly specified:

.. code:: python

    cdf = cd.DataFrame({"col1": [True,True,False],  # automatically a bool
                        "col2": [1,0,1]},           # specified using ctype
                       ctype={"col2": "bool"})

Certain operations, such as filtering on a column and using the binary operators ``&`` and ``|``, work only on boolean columns. In the example below, crandas implicitly understands that the column is boolean.

.. code:: python

    cdf = cd.DataFrame({"vals": [1,2,3], "filter": [1,0,1]})
    cdf = cdf[cdf["filter"]]

.. note::

    The Virtual Data Lake will check whether the column contains only 0s and 1s; if so, it will automatically treat it as a boolean (even if it is not meant as one).

Inspecting the integer type
---------------------------

In the current version of crandas, it is not possible to check the integer type directly. The feature is coming soon and will be accessible through ``repr``.

.. TODO: Add when issue #552 is fixed

Elementwise Operations and Comparisons
======================================

In this section, we will demonstrate how to perform basic arithmetic operations and comparisons on columns in crandas. All of these operations output a :class:`.CSeries` with the result of applying the operation to each row of the columns.

.. code:: python

    df = cd.DataFrame({"a":[1,2,3,4], "b":[5,6,7,8]})

    # Equal to
    df_equal_to = df["a"] == df["b"]

    # Not equal to
    df_not_equal_to = df["a"] != df["b"]

    # Less than
    df_less_than = df["a"] < df["b"]

    # Less than or equal to
    df_less_than_equal = df["a"] <= df["b"]

    # Greater than
    df_greater_than = df["a"] > df["b"]

    # Greater than or equal to
    df_greater_than_equal = df["a"] >= df["b"]

    # Add
    df_add = df["a"] + df["b"]

    # Subtract
    df_subtract = df["a"] - df["b"]

    # Multiply
    df_multiply = df["a"] * df["b"]

.. TODO: .abs() doesn't work so have removed
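Since comparisons produce boolean columns, their results can also be used to filter a table, in the same way as the ``filter`` column in the Boolean columns section above. A minimal sketch, reusing the ``df`` defined above:

.. code:: python

    # The comparison yields a boolean column that acts as a filter:
    # keep only the rows where "a" is strictly smaller than "b"
    df_filtered = df[df["a"] < df["b"]]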
Beyond these standard operations, crandas has functions that compare two columns and return a column with either the maximum or the minimum of each row:

.. code:: python

    # Calculate the minimum between "a" and "b" - returns a series
    abmin = cd.series_min(df["a"], df["b"])

    # Calculate the maximum between "a" and "b" - returns a series
    abmax = cd.series_max(df["a"], df["b"])

Aggregation operations
======================

In addition to this, crandas provides several aggregation operations, which can be applied to columns:

- Sum (:meth:`.CSeriesColRef.sum`)
- Mean (:meth:`.CSeriesColRef.mean`)
- Variance (:meth:`.CSeriesColRef.var`)
- Count (:meth:`.CSeriesColRef.count`)
- Maximum (:meth:`.CSeriesColRef.max`)
- Minimum (:meth:`.CSeriesColRef.min`)
- Sum of squares (:meth:`.CSeriesColRef.sum_squares`)

.. code:: python

    df = cd.DataFrame({"a":[1,2,3,4], "b":[5,6,7,8]})

    # Sum of a column
    a_sum = df["a"].sum()

    # Mean of a column
    a_mean = df["a"].mean()

    # Variance of a column
    a_var = df["a"].var()

    # Count the number of items of a column (non-NULL elements)
    a_count = df["a"].count()

    # Maximum of a column
    a_max = df["a"].max()

    # Minimum of a column
    a_min = df["a"].min()

    # Sum of squares of a column
    a_sum_of_squares = df["a"].sum_squares()

Maximum and minimum can also be applied over a whole table, computing the maximum or minimum per (numeric) column.

.. code:: python

    # Compute the maximum of each (numeric) column in df
    df_maximums = df.max()

    # Compute the minimum of each (numeric) column in df
    df_minimums = df.min()

.. note::

    Note the difference between the elementwise functions that compute the minimum/maximum of two columns row by row (:func:`.crandas.series_min` / :func:`.crandas.series_max`) and the ones that compute the minimum/maximum of *the elements in one column* (:meth:`.CSeriesColRef.min` / :meth:`.CSeriesColRef.max`).

Working with Vectors
====================

Crandas allows for vector operations on integer vectors:

- Element-wise addition
- Element-wise product
- Inner product (:meth:`.CSeries.inner`)
- Vector sum of vector columns (:meth:`.CSeries.vsum`)

.. code:: python

    df = cd.DataFrame({
        "col1": [[1,2,3],[3,2,4],[2,3,1]],
        "col2": [[2,3,3],[4,2,6],[3,3,3]]
    })

Addition and multiplication work the same way as if we were working with scalars:

.. code:: python

    # Addition
    df["col1"] + df["col2"]

    # Multiplication
    df["col1"] * df["col2"]

.. TODO: Scalar multiplication #609

Moreover, we also provide vector operations, such as the inner product and the sum of the elements of a vector. These two functions (:meth:`.CSeries.inner` and :meth:`.CSeries.vsum`) are implemented as methods of the column, so the notation might be slightly unnatural (in particular for the inner product).

.. code:: python

    # Assign a new column "inner" with the inner product of "col1" and "col2"
    df = df.assign(inner=df["col1"].inner(df["col2"]))

    # For each row in col1, add all the elements in that vector
    df['col1'].vsum()

.. hint::

    Whenever you want to access the result of a column operation without adding it to a table, use ``as_table()``, like this: ``(df["col1"]+df["col2"]).as_table()``.

Now that we know how to work with numeric columns as well as the operations we can do with them, in the next section we will learn how to combine multiple tables. This feature is one of the most important for data sharing, as it allows us to consolidate data from multiple sources into one.