loki.frontend.util

Functions

available_frontends([xfail, skip, include_regex])

Provide a list of available frontends to parametrize tests with.

inline_labels(ir)

Find labels and merge them onto the following node.

match_type_pattern(pattern, sequence)

Match elements in a sequence according to a pattern of their types.

read_file(file_path)

Reads a file and returns its content as a string.

sanitize_ir(_ir, frontend[, pp_registry, ...])

Utility function to sanitize the internal representation after creating it from the parse tree of a frontend.

Classes

ClusterCommentTransformer([mapper, ...])

Combines consecutive sets of Comment into a CommentBlock.

CombineMultilinePragmasTransformer([mapper, ...])

Combine multiline Pragma nodes into single ones.

Frontend(value)

Enumeration to identify available frontends.

InlineCommentTransformer([mapper, ...])

Identify inline comments and merge them onto statements.

class Frontend(value)

Bases: IntEnum

Enumeration to identify available frontends.

OMNI = 1

The OMNI compiler frontend

OFP = 2

The Open Fortran Parser

FP = 3

Fparser 2 from STFC

REGEX = 4

Reduced functionality parsing using regular expressions
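
Because Frontend derives from IntEnum, its members can be compared by identity or by their integer value. A minimal sketch, assuming the class is importable from this module:

    from loki.frontend.util import Frontend  # import path assumed from this page

    frontend = Frontend.FP

    # IntEnum members compare by identity and by their integer value
    assert frontend is Frontend.FP
    assert frontend == 3

    # Standard enum attributes give a readable name and the raw value
    print(frontend.name)   # 'FP'
    print(frontend.value)  # 3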

available_frontends(xfail=None, skip=None, include_regex=False)

Provide a list of available frontends to parametrize tests with.

To run tests for every frontend, add an argument frontend to a test and pass the return value of this function as the parameter list.

For any unavailable frontend, i.e., where HAVE_<frontend> is False (e.g., because required dependencies are not installed), the test is marked as skipped.

Use as:

    @pytest.mark.parametrize('frontend', available_frontends(xfail=[OMNI, (OFP, 'Because...')]))
    def my_test(frontend):
        source = Sourcefile.from_file('some.F90', frontend=frontend)
        # ...

Parameters:
  • xfail (list, optional) – Provide frontends that are expected to fail, optionally as tuple with reason provided as string. By default None

  • skip (list, optional) – Provide frontends that are always skipped, optionally as tuple with reason provided as string. By default None

  • include_regex (bool, optional) – Include the REGEX frontend in the list. By default False.
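
The skip and include_regex arguments are used analogously to xfail; a minimal sketch, assuming Sourcefile is importable from the top-level loki package (the test body, reason string, and file name are illustrative):

    import pytest
    from loki import Sourcefile
    from loki.frontend.util import available_frontends, Frontend

    @pytest.mark.parametrize('frontend', available_frontends(
        skip=[(Frontend.OMNI, 'Skipped because preprocessing differs')],
        include_regex=True
    ))
    def test_regex_and_full_parse(frontend):
        # REGEX is included here, so the test also runs with the reduced parser
        source = Sourcefile.from_file('some.F90', frontend=frontend)
        # ...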

read_file(file_path)

Reads a file and returns its content as a string.

This convenience function is provided to catch read errors due to bad character encodings in the file. It skips over these characters and prints a warning for the first occurrence of such a character.
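
A minimal usage sketch (the file name is illustrative):

    from loki.frontend.util import read_file

    # Returns the file content as a single string; characters with a bad
    # encoding are skipped, with a warning printed for the first occurrence
    content = read_file('some.F90')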

class InlineCommentTransformer(mapper=None, invalidate_source=True, inplace=False, rebuild_scopes=False)

Bases: Transformer

Identify inline comments and merge them onto statements.

visit_tuple(o, **kwargs)

Visit all elements in a tuple, injecting any one-to-many mappings.

visit_list(o, **kwargs)

Visit all elements in a tuple, injecting any one-to-many mappings.
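
Like any Transformer, the class is applied by calling visit on an IR tree. A minimal sketch, assuming an existing IR tree ir; the constructor arguments shown are illustrative of running the clean-up in place:

    from loki.frontend.util import InlineCommentTransformer

    # Rewrite the tree so that trailing comments are attached to the
    # statements they annotate
    ir = InlineCommentTransformer(inplace=True, invalidate_source=False).visit(ir)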

class ClusterCommentTransformer(mapper=None, invalidate_source=True, inplace=False, rebuild_scopes=False)

Bases: Transformer

Combines consecutive sets of Comment into a CommentBlock.

visit_tuple(o, **kwargs)

Find groups of Comment and inject into the tuple.

visit_list(o, **kwargs)

Find groups of Comment and inject into the tuple.

class CombineMultilinePragmasTransformer(mapper=None, invalidate_source=True, inplace=False, rebuild_scopes=False)

Bases: Transformer

Combine multiline Pragma nodes into single ones.

visit_tuple(o, **kwargs)

Finds multi-line pragmas and combines them in-place.
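
The comment-clustering and pragma-combining transformers are applied in the same fashion; a sketch chaining both over an existing IR tree ir (constructor arguments are illustrative):

    from loki.frontend.util import (
        ClusterCommentTransformer, CombineMultilinePragmasTransformer
    )

    # Group consecutive Comment nodes into CommentBlock nodes
    ir = ClusterCommentTransformer(inplace=True, invalidate_source=False).visit(ir)

    # Merge multiline Pragma nodes into single Pragma nodes
    ir = CombineMultilinePragmasTransformer(inplace=True, invalidate_source=False).visit(ir)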

sanitize_ir(_ir, frontend, pp_registry=None, pp_info=None)

Utility function to sanitize the internal representation after creating it from the parse tree of a frontend.

It carries out post-processing according to pp_info and applies sanitisation operations such as merging inline comments onto statements, clustering consecutive Comment nodes into CommentBlock nodes, and combining multiline Pragma nodes into single ones.

Parameters:
  • _ir (Node) – The root node of the internal representation tree to be processed

  • frontend (Frontend) – The frontend from which the IR was created

  • pp_registry (dict, optional) – Registry of pre-processing items to be applied

  • pp_info (optional) – Information from an internal preprocessing step that was applied to work around parser limitations and that should be re-inserted
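
A sketch of a typical call, assuming raw_ir is an IR tree freshly created from an Fparser 2 parse tree and no preprocessing information needs to be re-inserted:

    from loki.frontend.util import sanitize_ir, Frontend

    # Apply the sanitisation transformers documented above to the raw tree
    ir = sanitize_ir(raw_ir, Frontend.FP, pp_registry=None, pp_info=None)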