
Assignment statement grammar (version 3)

Main script for the AssignmentStatement3 grammar (version 3). Contains semantic rules to perform type checking and semantic routines to generate an intermediate representation (three-address code).
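As a rough illustration of the kind of semantic rule mentioned above, the sketch below checks whether an expression's type may be assigned to a target variable. The rule itself (int widens implicitly to float, nothing else converts) is an assumption for illustration, not the grammar's actual type system.

```python
def check_assignment(target_type, expr_type):
    """Return True when a value of `expr_type` may be assigned to a
    variable declared as `target_type` (hypothetical rule set)."""
    if target_type == expr_type:
        return True
    if target_type == 'float' and expr_type == 'int':
        return True                       # implicit widening conversion
    return False

print(check_assignment('float', 'int'))   # widening is allowed
print(check_assignment('int', 'float'))   # narrowing is rejected
```

In a real ANTLR listener, a check like this would run in the exit method of the assignment rule, using types recorded in a symbol table.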

author

Morteza Zakeri, (http://webpages.iust.ac.ir/morteza_zakeri/)

date

2020-10-28

Required

  • Compiler generator: ANTLR 4.x
  • Target language(s): Python 3.8.x

Changelog

v3.0

  • Add semantic rules to perform type checking
  • Add semantic routines to generate an intermediate representation (three-address code)
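The second item above, generating three-address code, can be sketched independently of ANTLR: each nested subexpression is flattened into an instruction whose result lands in a compiler-generated temporary. The tuple AST shape and function names below are illustrative, not the classes the grammar actually produces.

```python
from itertools import count

def emit_tac(node, code, temps):
    """Recursively flatten a tuple AST ('op', left, right) into
    three-address instructions; return the name holding the result."""
    if isinstance(node, str):             # a variable or literal leaf
        return node
    op, left, right = node
    lhs = emit_tac(left, code, temps)
    rhs = emit_tac(right, code, temps)
    temp = f't{next(temps)}'              # fresh temporary t1, t2, ...
    code.append(f'{temp} = {lhs} {op} {rhs}')
    return temp

def compile_assignment(target, expr):
    """Translate `target = expr` into a list of three-address instructions."""
    code = []
    result = emit_tac(expr, code, count(1))
    code.append(f'{target} = {result}')
    return code

# a = b + c * d  →  t1 = c * d ; t2 = b + t1 ; a = t2
print(compile_assignment('a', ('+', 'b', ('*', 'c', 'd'))))
```

In the listener-based version, the exit method of each expression rule would emit one instruction and attach the temporary's name to the parse-tree node for its parent to consume.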

Refs

  • Reference: Compiler book by Dr. Saeed Parsa (http://parsa.iust.ac.ir/)
  • Course website: http://parsa.iust.ac.ir/courses/compilers/
  • Laboratory website: http://reverse.iust.ac.ir/

main(args)

Create the lexer and parser for the language application.

Parameters:

Name  Type    Description             Default
args  string  command line arguments  required

Returns: None
Source code in language_apps\assignment_statement_v3\assignment_statement3main.py
def main(args):
    """
    Create the lexer and parser for the language application.

    Args:
        args: parsed command-line arguments (must provide a `file`
            attribute holding the input source path).

    Returns:
        None
    """

    # Step 1: Load input source into stream
    stream = FileStream(args.file, encoding='utf8')
    # input_stream = StdinStream()
    print('Input stream:')
    print(stream)
    print('Compiler result:')

    # Step 2: Create an instance of AssignmentStatement3Lexer
    lexer = AssignmentStatement3Lexer(stream)
    # Step 3: Convert the input source into a list of tokens
    token_stream = CommonTokenStream(lexer)
    # Step 4: Create an instance of AssignmentStatement3Parser
    parser = AssignmentStatement3Parser(token_stream)
    # Step 5: Create parse tree
    parse_tree = parser.start()
    # Step 6: Create an instance of the listener (MyListener) and walk the tree
    my_listener = MyListener()
    walker = ParseTreeWalker()
    walker.walk(t=parse_tree, listener=my_listener)

    # Debug output: rewind the lexer and dump the recognized tokens.
    # (The original `quit()` call here made this loop unreachable.)
    lexer.reset()
    token = lexer.nextToken()
    while token.type != Token.EOF:
        print('Token text:', token.text, 'Token line:', token.line)
        token = lexer.nextToken()
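Since `main` reads `args.file`, the caller presumably hands it a parsed argument namespace. A minimal entry-point sketch is shown below; the flag spelling and default are assumptions, as only the `file` attribute is visible in the source above.

```python
import argparse

def parse_args(argv=None):
    """Parse command-line arguments for the compiler driver.
    Hypothetical flags: only `args.file` is assumed by main()."""
    parser = argparse.ArgumentParser(
        description='Compile an assignment-statement source file.')
    parser.add_argument('-f', '--file',
                        help='path to the input source file',
                        default='input.txt')
    return parser.parse_args(argv)

if __name__ == '__main__':
    args = parse_args()
    # main(args)  # hand the parsed namespace to the compiler driver
```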