Processors

 

The Maya Block Begin now uses Services by default, and has a button to create a new Maya service definition.

 

The File Pattern TOP now has the option to add the mtime and file size as attributes when creating a work item for each matched file.

 

The new Attribute Dictionary TOP can be used to create or modify dictionary attributes.

 

The Attribute from String TOP now has an option, when splitting by delimiter, to rename the split, splitindex, and splitcount attributes.

 

Both the Attribute Create TOP and Attribute Array TOP can now create dictionary attributes.

 

Filter by Expression TOP now supports matching work items using a regular expression.

 

Filter by Range TOP now supports specifying ranges using a Value Pattern.

 

Geometry Import TOP can now import dictionary attribute data from SOPs or geometry files on disk.

 

File Decompress TOP can now process individual .gz files, which are decompressed into the node’s output directory.

 

The URL Request TOP has a new option to save JSON response data as a dictionary attribute.

 

The Python Virtual Environment TOP can now copy a configurable list of modules from $HHP into the venv.

 

The Work Item Import TOP can now deserialize work items stored in a dictionary attribute.

 

The JSON Output TOP can now also export work items to an attribute, as a JSON string, Python object, or dictionary.

 

The JSON Input TOP can now be configured to load JSON data from an upstream JSON string, Python object, or dictionary attribute.

 

Partitioners

 

Changed the behavior of the Unique Values merge operation on partitioner nodes so that it preserves the order of values from the original work items instead of sorting them. A new Sorted Unique merge operation, which has the old sorting behavior, has also been added.
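
To illustrate the two merge semantics in plain Python (an analogy only, not the PDG implementation):

    values = [5, 2, 5, 1, 2]

    # Unique Values (new behavior): keep the first occurrence of each
    # value, preserving the original order
    unique_values = list(dict.fromkeys(values))   # [5, 2, 1]

    # Sorted Unique (old behavior): deduplicate and sort
    sorted_unique = sorted(set(values))           # [1, 2, 5]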

 

Schedulers

 

Custom scheduler environment variables can now be specified using an external .env file, in addition to the existing multiparm for adding environment variables.

 

The Deadline Scheduler TOP now submits work items as jobs instead of tasks.

 

The AWS ECS Scheduler TOP allows you to deploy and manage Docker-based containers to run specific tasks. This node is available for download from the Content Library and does not ship with Houdini.

 

The HQueue Scheduler TOP now supports specifying resources using a multiparm.

 

UI/UX


When a work item is in a failed state due to an upstream failure, the work item inspector now displays the list of failed dependencies. Each entry in the list is a clickable link that opens the MMB panel and moves the network editor view to that dependency.

 

Added support for flagging numeric attributes in PDG as timestamps or memory, which influences their formatting in the work item panel.

 

Dictionary attributes can be accessed in parameter expressions using the @attrib syntax. For example, the expression @dict.values.4 accesses the value at index 4 of an array named “values” inside a dictionary attribute named “dict”.

 

APIs

 

PDG now supports storing Dictionary attribute data using the new pdg.attribType.Dict attribute type. Attributes of that type can store one or more dictionary values in an array. The value type itself is exposed through the pdg.Dictionary class, which stores data in the same format as the dictionary attribute type used for geometry. Dictionary attributes can be accessed in both C++ and Python.
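
For example, a dictionary attribute could be created from a Python processor callback. The following is a minimal sketch: pdg.Dictionary and pdg.attribType.Dict come from this note, while the setString/setInt setter names and the addAttrib/setValue calls are assumptions modeled on the existing typed attribute API.

    import pdg

    def onGenerate(self, item_holder, upstream_items, generation_type):
        for upstream_item in upstream_items:
            work_item = item_holder.addWorkItem(parent=upstream_item)

            # Build a dictionary value (setString/setInt are assumed names)
            config = pdg.Dictionary()
            config.setString("renderer", "karma")
            config.setInt("samples", 64)

            # Store it in a Dict-typed attribute on the work item
            attrib = work_item.addAttrib("config", pdg.attribType.Dict)
            attrib.setValue(config, 0)
        return pdg.result.Success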

 

PDG now has a new pdg.attrib Python expression function intended for use in parameter expressions written in Python. It has the same functionality as the pdgattrib(..) and pdgattribs(..) parameter expression functions, and looks up an attribute value from the active work item.
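
For instance, in a parameter expression whose language is set to Python (a minimal sketch; whether pdg.attrib accepts an optional index argument like pdgattrib does is an assumption):

    import pdg

    # Look up the "samples" attribute from the active work item,
    # equivalent to the HScript expression pdgattrib("samples", 0)
    return pdg.attrib("samples")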

 

PDG work items can now be serialized to pdg.Dictionary objects using the pdg.WorkItem.saveDict method, and deserialized using pdg.WorkItemHolder.addWorkItemFromDict.
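
A round trip might look like the following sketch inside a Python processor callback; the method names come from this note, but the exact argument lists are assumptions.

    def onGenerate(self, item_holder, upstream_items, generation_type):
        for upstream_item in upstream_items:
            # Serialize the upstream work item to a pdg.Dictionary
            data = upstream_item.saveDict()

            # Reconstruct an equivalent work item in this node
            item_holder.addWorkItemFromDict(data)
        return pdg.result.Success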

 

PDG processor nodes that generate work items dynamically may now be invoked with a list of upstream work items, instead of always generating from a single item. This is because, as a performance optimization, the graph evaluation logic can coalesce work item generations that happen around the same time. The exact number of work items passed to each invocation depends on various factors, such as the total number of upstream work items, how frequently the upstream items cook, and the overall processing load on the graph itself.
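
In practice, this means a Python processor's onGenerate callback should iterate over its upstream_items argument rather than assume a single entry:

    def onGenerate(self, item_holder, upstream_items, generation_type):
        # upstream_items may now contain several work items when the
        # graph coalesces generations, so always loop over the list
        for upstream_item in upstream_items:
            item_holder.addWorkItem(parent=upstream_item)
        return pdg.result.Success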