VDL and crandas master

Crandas

  • Preserve len(df) of pandas DataFrames without columns

  • Allow trivial grouping-based join where the right table only has key columns
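
    A minimal sketch of such a join, assuming the pandas-style merge(..., on=...) API; the right-hand table consists of the key column only:

    import crandas as cd
    left = cd.DataFrame({"key": [1, 2, 3], "val": [10, 20, 30]}, auto_bounds=True)
    keys_only = cd.DataFrame({"key": [1, 3]}, auto_bounds=True)
    # the right table contains only the join key, so the trivial grouping-based join applies
    left.merge(keys_only, on="key").open()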

  • Support for the concatenation of strings. For example:

    table = cd.DataFrame({"first_name": ["John", "Jan"], "last_name": ["Doe", "Jansen"]}, auto_bounds=True)
    full_names = table["first_name"] + " " + table["last_name"]
    full_names.open()

  • Add support for upper(), similar to the existing lower(). In addition, it is now also possible to change the case of only the characters at specific indices:

    table = cd.DataFrame({"name": ["john", "Jansen"]}, auto_bounds=True)
    table["name"].upper([0]).open() # Returns ["John", "Jansen"]
    table["name"].upper([1, 3, 5]).open() # Returns ["jOhN", "JAnSeN"]
    

1.10.0

The major new feature is expanded support for fixed-point columns.

Crandas

  • Expanded support for fixed-point columns (see the example after this list):

    • Fixed point columns now support larger range and precision (96 bits).

    • Fixed point columns now support various statistical functions (min(), max(), sum(), sum_squares(), mean(), var()).

    • Support for arithmetic operations between two fixed point columns, and between fixed-point and integer columns is added. (NB: we do not yet support division; this will be added in a later release.)

    • Support for concatenation of integer and fixed point columns (resulting in a fixed-point column) is added.

    • Support for join and filtering on fixed point columns is added.

    • Parsing of float constants in column operations (e.g. in filters or in assign) is now supported.
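
    For example, several of the operations above can be combined as follows (a sketch, assuming a column of Python floats is automatically stored as a fixed-point column):

    import crandas as cd
    cdf = cd.DataFrame({"x": [1.5, 2.25, 3.0], "n": [1, 2, 3]}, auto_bounds=True)
    cdf["x"].mean()                   # statistical functions on a fixed-point column
    total = cdf["x"] + cdf["n"]       # arithmetic between fixed-point and integer columns
    total.open()
    cdf[cdf["x"] > 1.5].open()        # filtering with a float constant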

  • The new dropna function removes rows with any missing values from a CDataFrame.
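
    A short sketch (assuming missing values can be passed as None when uploading nullable columns):

    import crandas as cd
    cdf = cd.DataFrame({"a": [1, 2, None], "b": [4, None, 6]}, auto_bounds=True)
    cdf.dropna().open()               # keeps only the rows without any missing values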

  • The new save function can be used to save an object such as a CDataFrame. If persistence is enabled on the server, this means that the object is kept across server restarts. The save command may also be used to attach a name to a computed table, e.g. table.save(name="my_table").
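
    For example (retrieving the saved table by name via cd.get_table is an assumption):

    import crandas as cd
    cdf = cd.DataFrame({"a": [1, 2, 3]}, auto_bounds=True)
    result = cdf[cdf["a"] > 1]        # a computed table
    result.save(name="my_table")      # kept across server restarts if persistence is enabled
    cd.get_table("my_table")          # assumption: the name can be used to retrieve the table later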

  • The connection file and Session now both have an optional api_token property. This is sent to the server and may be used for authentication purposes.
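
    A sketch of supplying the token; both the connection-file key name and setting the property on the global session are assumptions based on the description above:

    import crandas as cd
    # assumption: the token can be set on the global session before connecting
    cd.base.session.api_token = "my-secret-token"
    # alternatively (assumption), add a line like `api_token = "my-secret-token"` to the connection file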

  • The functions obj.remove() and cd.remove_objects() have been changed to provide more information in case non-existent object(s) are removed.

    BREAKING: when removing multiple objects using cd.remove_objects(lst), the new behavior is to try to remove all objects even if errors are encountered. The old behavior was to abort on the first error. See the documentation for details.

1.9.2

1.9.1

Crandas

No changes.

1.9.0

Crandas

  • The Session object now has two settings modes, depending on whether a VDL connection file is used (recommended method), or whether the endpoint, certificate, and server public keys are specified manually (legacy method). These are reflected in the settings_mode attribute of the Session object.

    When endpoint is set by the user, the Session is set to legacy mode; otherwise, the connection file method is assumed. When the user does not configure anything, the default is to load the default.vdlconn file residing in the configuration folder (default: ~/.config/crandas, overridable by the CRANDAS_HOME environment variable). The name default.vdlconn can be overridden through the default_connection_file variable. If that file is not present, crandas scans the configuration folder for files with the extension .vdlconn: if there is a single such file, it is used; if there are multiple, an error is raised.

    analyst_key is now a read-write property that returns the nacl SigningKey, and can be set to either a SigningKey, a filename, a path, or None. When set to None, the default key is loaded. Both the default key file and the default relative path depend on the settings mode. In connection file mode, the default key file is analyst.sk; a path (a Path, or a string that includes a slash "/") is resolved against the current working directory, while a plain filename (a string without a slash) is assumed to reside in the configuration folder. In legacy mode, the default key file is clientsign.sk and paths are resolved against the base_path (to maintain backwards compatibility).
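
    For example, setting the key on the global session object (crandas.base.session):

    import crandas as cd
    cd.base.session.analyst_key = "analyst.sk"         # plain filename: looked up in the configuration folder
    cd.base.session.analyst_key = "./keys/analyst.sk"  # contains a slash: resolved against the working directory
    cd.base.session.analyst_key = None                 # reload the default key for the current settings mode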

  • Besides the Session object, which is used to configure the connection to the VDL, we introduce Dynaconf for user configuration of settings that are not directly related to the connection. The new method provides an easy way for the user to set variables, either using code, using environment variables, or using a settings file (default: settings.toml in the same configuration folder referred to above).

  • We make displaying progress bars configurable using the show_progress_bar and show_progress_bar_after (for the delay in seconds) variables.

  • To create the configuration folder and display it in the user’s file browser, the user can now call python -m crandas config.

  • We support the Any placeholder for get_table

  • We support a stepless mode in scripts, which can be manually enabled to remove script_step numbers from certain queries. This can be useful together with the Any placeholder, to have queries that can be executed a variable number of times.

  • Add a map_dummy_handles override in calls to get_table

  • In CDataFrame.assign, we now support the use of column names that correspond to VDL query arguments (e.g. “name”, “bitlength”).

    BREAKING: existing scripts that use these VDL query arguments will now give an error message explaining how these arguments should be specified. Existing authorizations are not affected.

  • Add support for the following operators in regular expressions (see the example after this list):

    • {n}: match exactly n times

    • {min,}: match at least min times

    • {,max}: match at most max times

    • {min,max}: match at least min and at most max times
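
    For example, the repetition operators can be used with CSeries.fullmatch (following the fullmatch example in the 1.8.0 notes below):

    import crandas as cd
    import crandas.re
    table = cd.DataFrame({"col": ["a", "aa", "aaa", "aaaa"]}, auto_bounds=True)
    two_or_three = table["col"].fullmatch(cd.re.Re("a{2,3}"))
    table[two_or_three].open()        # keeps only "aa" and "aaa"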

  • Support was added to disable HTTP Keep-Alive in connections to the VDL server. This can help solve connection stability issues. Keep-Alive can be disabled in the connection file by setting keepalive = false. The setting can be overridden by the user by using the keepalive parameter of crandas.connect.
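
    For example:

    import crandas as cd
    # override the connection file setting and disable HTTP Keep-Alive for this connection
    cd.connect(keepalive=False)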

  • Add sort_values function to a CDataFrame, which sorts the dataframe according to a column. Example:

    cdf = cd.DataFrame({"a": [3, 1, 4, 5, 2], "b": [1, 2, 3, 4, 5]}, auto_bounds=True)
    cdf = cdf.sort_values("a")
    

    Currently, sorting on strings is not supported.

  • Add support for groupby on multiple columns and on all non-nullable column types.

    For example, this is now possible:

    cdf = cd.DataFrame({"a": ["foo", "bar", "foo", "bar"], "b": [1, 1, 1, 2]}, auto_bounds=True)
    tab = cdf.groupby(["a", "b"]).as_table()
    sorted(zip(tab["a"].open(), tab["b"].open()))
    

    The parameter name of the groupby is renamed from col to cols to reflect these changes. Currently, a maximum of around 100 000 unique values is supported; above that, the groupby will fail with an error message. Note that this limit applies to the number of unique values: the number of rows can be significantly higher, as long as there are fewer than 100 000 distinct values in the groupby column(s). Furthermore, a consequence of the new implementation is that the output order is no longer stable but random.

  • Add k-nearest neighbors functionality. This allows the target value of a new data point to be predicted based on the existing data using its k nearest neighbors. Example:

    import crandas as cd
    from crandas.crlearn.neighbors import KNeighborsRegressor
    X_train = cd.DataFrame({"input": [0, 1, 2, 3]}, auto_bounds=True)
    y_train = cd.DataFrame({"output": [0, 0, 1, 1]}, auto_bounds=True)
    X_test = cd.DataFrame({"input": [1]}, auto_bounds=True)
    neigh = KNeighborsRegressor(n_neighbors=3)
    neigh.fit(X_train, y_train)
    neigh.predict_value(X_test)
    

    For more information, see crandas.crlearn.neighbors.KNeighborsRegressor.

  • Add a new aggregator crandas.groupby.any, which takes an arbitrary value from the set of grouped values and is faster than crandas.groupby.max/crandas.groupby.min

  • In the HTTP connection to the VDL server, use retries for certain HTTP requests to improve robustness

  • Add created property to dataframes and other objects indicating the date and time when they were uploaded or computed

  • Handle cancellation of a query by raising a QueryInterruptedError. This replaces the previous behaviour of returning None and printing “Computation cancelled”. In ipython, the “Computation cancelled” message is still shown.

  • In the progress bar for long-running computations, show “no estimate available yet” as long as progress is at 0% (instead of a more cryptic notation).

  • Add functionality to list uploads to the VDL. For more information, see: crandas.stateobject.list_uploads and crandas.stateobject.get_upload_handles.

1.8.1

Crandas fixes

  • crandas.get_table() now ensures connect() is called first

  • Fix upload and decoding of positive 64-bit numbers. In crandas, trying to upload and download numbers in the range R = [2^{63}, 2^{64} - 1] would previously fail. We fix this issue by mimicking pandas behavior: a number in the range R is returned as an np.uint64. Additionally, when uploading, np.uint64, np.uint32, and np.uint16 are now recognized as integers.
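
    A short sketch of the fixed behavior (assuming a numpy array is accepted as column data):

    import numpy as np
    import crandas as cd
    # values in [2**63, 2**64 - 1] previously failed to upload and download
    vals = np.array([2**63, 2**64 - 1], dtype=np.uint64)
    cdf = cd.DataFrame({"col": vals}, auto_bounds=True)
    cdf["col"].open()                 # values in that range come back as np.uint64, mimicking pandas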

1.8.0

Major new features include:

  • Support for bigger (96 bit) integers

  • Progress bars for running queries and the possibility of cancelling running queries

  • Memory usage improvements (client & server)

  • Null value (missing values) support for all column types

  • Searching strings using regular expressions

  • Added a date column type

New features

  • Support for columns with bigger (96 bit) integers

    Just like in the previous version, integers have the ctype int. When specifying the ctype, minimum and maximum bounds for the values can be supplied using the min and max parameters, e.g. int[min=0, max=1000]. Bounds (strictly) between -2^95 and 2^95 are now supported.

    For example, to upload a column "col": [1, 2, 3, 4] as an int, use the following ctype spec, as in previous versions:

    table = cd.DataFrame({"col":[1, 2, 3, 4]},  ctype={"col": "int[min=1,max=4]"})
    


    To force usage of a particular modulus the integer ctype accepts the keyword argument modulus, which can be set to either of the moduli that are hardcoded in crandas.moduli. For example, to force usage of large integers one can run:

    from crandas.moduli import moduli
    table = cd.DataFrame({"col":[1, 2, 3, 4]},  ctype={"col": f"int[min=1,max=4,modulus={moduli[128]}]"})
    

    Notes:

    • crandas will automatically switch to int[modulus={moduli[128]}] if the (derived) bounds do not fit in an int32.

    • crandas will throw an error if the bounds do not fit in an int96.

    We refer to 32-bit integer columns as F64 and to 96-bit integer columns as F128, because they are internally represented as 64- and 128-bit numbers, respectively, to account for a necessary security margin.

    Supported features for large integers:

    • Basic binary arithmetic (+, -, *, ==, <, >, <=, >=) between any two integer columns

    • Groupby and filter on large integers

    • Unary functions on large integer columns, such as mean(), var(), sum(), ...

    • if_else, where the three arguments guard, ifval, and elseval may each be any integer column

    • Conversion from 32-bit integer columns to large integer columns via astype, and vice versa (see the sketch after this list)

    • Vertical concatenation of integer columns based on different moduli

    • Performing a join on columns based on different moduli
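
    For instance, a 32-bit integer column can be converted to a large integer column and back (a sketch, assuming astype accepts a ctype string):

    import crandas as cd
    from crandas.moduli import moduli
    small = cd.DataFrame({"col": [1, 2, 3, 4]}, ctype={"col": "int[min=1,max=4]"})
    large = small["col"].astype(f"int[modulus={moduli[128]}]")  # force the F128 modulus
    back = large.astype("int[min=1,max=4]")                     # bounds that fit in an int32 again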

    Current limitations:

    • We do not yet support string conversion to large integers

    • json_to_val currently only allows integers up to int32

    • IntegerList is currently only defined over F64

    Changes:

    • base.py: deprecated session.modulus

    • crandas.py: the classes Col and ReturnValue now also include the modulus

    • ctypes.py:

      • added support to encode/decode integers of 128 bits

      • made ctype class decoding modulus dependent

    • input.py: mask and unmask are now dependent on the modulus

    • placeholders.py: class Masker now also contains a modulus

    • NEW FILE moduli.py: containing the default moduli for F64 as well as F128.

  • Searching strings and regular expressions

    To search a string column for a particular substring, use the CSeries.contains function:

    table = cd.DataFrame({"col": ["this", "is", "a", "text", "column"]})
    only_is_rows = table["col"].contains("is")
    table[only_is_rows].open()
    

    Regular expressions are also supported, using the new CSeries.fullmatch function:

    import crandas.re
    table = cd.DataFrame({"col": ["this", "is", "a", "text", "column"]})
    starts_with_t = table["col"].fullmatch(cd.re.Re("t.*"))
    table[starts_with_t].open()
    

    Regular expressions support the following operations:

    • |: union

    • *: Kleene star (zero or more)

    • +: one or more

    • ?: zero or one

    • .: any character (note that this also matches non-printable characters)

    • (, ): regexp grouping

    • [...]: set of characters (including character ranges, e.g., [A-Za-z])

    • \d: digits (equivalent to [0-9])

    • \s: whitespace (equivalent to [ \t\n\r\f\v])

    • \w: alphanumeric and underscore (equivalent to [a-zA-Z0-9_])

    • (?1), (?2), …: substring (given as additional argument to CSeries.fullmatch())

    Regular expressions are represented by the class crandas.re.Re. It uses pyformlang’s functionality under the hood.

  • Efficient text operations for ASCII strings

    The varchar ctype now has an ASCII mode for increased efficiency with strings that contain only ASCII characters (no “special” characters; all codepoints <= 127). Before this change, we only supported general Unicode strings. Certain operations (in particular comparison, searching, and regular expression matching) are more efficient for ASCII strings.

    By default, crandas autodetects whether or not the more efficient ASCII mode can be used. This information (whether or not ASCII mode is used) becomes part of the public metadata of the column, and crandas will give a ColumnBoundDerivedWarning to indicate that the column metadata is derived from the data in the column, unless auto_bounds is set to True.

    Instead of auto-detection, it is also possible to explicitly specify the ctype varchar[ascii] or varchar[unicode], e.g.:

    import crandas as cd
    
    # ASCII autodetected: efficient operations available; warning given
    cdf = cd.DataFrame({"a": ["string"]})
    
    # Unicode autodetected: efficient operations not available; warning given
    cdf = cd.DataFrame({"a": ["stri\U0001F600ng"]})
    
    # ASCII annotated; efficient operations available; no warning given
    cdf = cd.DataFrame({"a": ["string"]}, ctype={"a": "varchar[ascii]"})
    
    # Unicode annotated; efficient operations not available; no warning given
    cdf = cd.DataFrame({"a": ["string"]}, ctype={"a": "varchar[unicode]"})
    
  • Running computations can now be cancelled

    Locally aborting a computation (e.g. Ctrl+C) will now cause it to be cancelled on the server as well.

    • Rename crandas.query to crandas.command to be consistent with server-side implementation and to differentiate from the new crandas.queries module

    • Add module crandas.queries providing a client-side implementation of the task-oriented VDL query API, and use this for all queries performed via vdl_query. To perform queries, a block-then-poll strategy is used: first, a blocking query with a timeout of 5 seconds is performed, and if the result is not ready by then, status update polls are done at a 1-second interval

  • All column types now support missing values

    All ctypes now support a nullable flag, indicating that values may be missing. It may also be specified using a question mark, e.g. varchar?.
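
    For example (a sketch, assuming missing values can be passed as None on upload):

    import crandas as cd
    # the question mark marks the column as nullable
    cdf = cd.DataFrame({"name": ["alice", None, "carol"]}, ctype={"name": "varchar?"})
    cdf["name"].open()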

  • Progress reporting for long-running queries

    Queries that take at least 5 seconds now result in a progress bar being displayed that estimates the progress of the computation.

    To enable this for Jupyter notebooks, crandas should be installed with the notebook dependency flag; see below.

  • Various memory improvements for both server and client

  • Large data uploads and downloads are now automatically chunked

    Uploads are processed in batches of size crandas.ctypes.ENCODING_CHUNK_SIZE.

  • Added a date column type

    Dates can now be encoded using the date ctype; an example follows the list below.

    • Dates are limited to the range 1901/01/01 - 2099/12/31 for leap-year reasons

    • Ability to subtract two dates to get number of days and add days to a date

    • All comparison operators apply for date

    • Created functions for year, month, day and weekday

    • Able to group over dates, merge and filter

    • New ctype DateCtype converts strings (through pd.to_datetime) and Python dates (datetime.date, datetime64, and pd.Timestamp) into crandas dates

    • Helper subclass of CSeries _DT allows for pandas-style calling of date retrieval functions (col.dt.year) and standard calls (col.year).
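
    A sketch combining the date features above:

    import crandas as cd
    # strings are converted to crandas dates through pd.to_datetime
    cdf = cd.DataFrame({"d": ["2021-01-01", "2021-06-15"]}, ctype={"d": "date"})
    cdf["d"].dt.year.open()           # pandas-style accessor
    cdf["d"].year.open()              # standard call
    (cdf["d"] + 30).open()            # assumption: adding an integer adds that many days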

Crandas

  • New dependencies: tqdm and pyformlang

  • New dependency flag: notebook, for features related to Jupyter notebooks. Use pip install crandas[notebook] to install these.

  • Dependency urllib3 is updated to ensure ‘assert_hostname = False’ does work as expected

  • Documentation updates

  • Recording or loading a new script when there is already another script active now no longer gives an error, but a warning message is printed instead.

  • Support with_threshold for aggregation

    This adds support for e.g. table["column"].with_threshold(10).sum(). Before this change, with_threshold() was only supported for filtering operations, e.g. table[filter.with_threshold(5)], and not for aggregation operations (min, max, sum, etc.).

    Note that the alternative that worked before table["column"].sum(threshold=5) is still supported, for both aggregation and filtering operations.

    Minor change: supplying both with_threshold() and a threshold argument now raises a ValueError instead of a TypeError when these are different.
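
    For example:

    import crandas as cd
    table = cd.DataFrame({"column": [1, 2, 3, 4, 5]}, auto_bounds=True)
    table["column"].with_threshold(3).sum()   # new: threshold on an aggregation
    table["column"].sum(threshold=3)          # previously supported form, still available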

  • Implement setter for base_path

    The crandas Session object now supports setting base_path to either a string, a Path, or None. Retrieving the property will always return a Path.

  • Fix problem where calling size() on a groupby object would fail for int32 columns

  • Improved message for auto-determined bounds

    • Collect all auto_bounds warnings from a data upload into a single warning message

    • Allow setting auto_bounds globally via crandas.base.session