Quiz

Lexical Analysis

Lexical Tokenization Overview

1. What is the primary purpose of the lexical analysis phase in a compiler?

  • A) To generate machine code
  • B) To optimize code
  • C) To convert source code into tokens
  • D) To perform type-checking
Answer: C) To convert source code into tokens

2. What is a common tool used for generating lexical analyzers?

  • A) yacc
  • B) lex
  • C) gcc
  • D) make
Answer: B) lex

3. Which of the following is an example of a token name in lexical tokenization?

  • A) x = a + b * 2;
  • B) Identifier
  • C) Lexeme
  • D) Syntax tree
Answer: B) Identifier

4. Which method is commonly used by lexers to identify tokens?

  • A) Binary search
  • B) Regular expressions
  • C) Context-free grammar
  • D) Linear regression
Answer: B) Regular expressions
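As a minimal sketch of how a lexer can drive tokenization with regular expressions (illustrative only; the names `TOKEN_SPEC` and `tokenize` are made up for this example), each token class gets a pattern, and the lexer scans the input emitting (token type, lexeme) pairs:

```python
import re

# Token specification: each pair is (token name, regular expression).
TOKEN_SPEC = [
    ("NUMBER",     r"\d+"),
    ("IDENTIFIER", r"[A-Za-z_]\w*"),
    ("OPERATOR",   r"[+\-*/=]"),
    ("SEMICOLON",  r";"),
    ("SKIP",       r"\s+"),   # whitespace: matched, then discarded
]
MASTER_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source):
    """Yield (token name, lexeme) pairs for the given source string."""
    for match in MASTER_RE.finditer(source):
        kind = match.lastgroup
        if kind != "SKIP":        # whitespace never reaches the parser
            yield (kind, match.group())

print(list(tokenize("x = a + b * 2;")))
# e.g. the first token is ('IDENTIFIER', 'x')
```

Tools like lex generate essentially this kind of scanner automatically from a table of regular expressions.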

5. What is the typical format of a token generated by a lexical analyzer?

  • A) {Token type, Token value}
  • B) {Token ID, Token value}
  • C) {Token value, Token length}
  • D) {Token name, Token position}
Answer: A) {Token type, Token value}

6. Which phase follows lexical tokenization in the compilation process?

  • A) Machine code generation
  • B) Semantic analysis
  • C) Syntax analysis
  • D) Optimization
Answer: C) Syntax analysis

7. Which language handles indentation at the lexer level because of its block-structure rules?

  • A) C
  • B) Python
  • C) Java
  • D) JavaScript
Answer: B) Python
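Python's standard-library `tokenize` module exposes this directly: the lexer itself emits explicit INDENT and DEDENT tokens wherever the indentation level changes, so the parser can treat blocks like braced regions. A small demonstration:

```python
import io
import tokenize

# Python's lexer turns indentation changes into INDENT/DEDENT tokens.
source = "if x:\n    y = 1\n"
tokens = tokenize.generate_tokens(io.StringIO(source).readline)
names = [tokenize.tok_name[tok.type] for tok in tokens]
print(names)   # the stream includes 'INDENT' and a matching 'DEDENT'
```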

8. What does the term “maximal munch” refer to in lexical tokenization?

  • A) The process of minimizing the number of tokens
  • B) The rule of consuming input until reaching a character that cannot be part of the current token
  • C) The optimization of token evaluation
  • D) The conversion of tokens into numerical values
Answer: B) The rule of consuming input until reaching a character that cannot be part of the current token
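Maximal munch means that when several patterns match at the current position, the lexer takes the longest one: `>>` is one shift operator rather than two `>` tokens, and `==` is one comparison rather than two `=`. A hedged sketch (the helper names are invented for illustration):

```python
import re

# Overlapping patterns: '==' vs '=', '>>' vs '>'.
PATTERNS = [("EQ", r"=="), ("ASSIGN", r"="), ("SHR", r">>"), ("GT", r">"),
            ("NUMBER", r"\d+"), ("SKIP", r"\s+")]

def next_token(text, pos):
    """Return the longest match at pos -- the maximal-munch rule."""
    best = None
    for name, pat in PATTERNS:
        m = re.compile(pat).match(text, pos)
        if m and (best is None or m.end() > best[2]):
            best = (name, m.group(), m.end())
    return best

def tokenize(text):
    pos, out = 0, []
    while pos < len(text):
        name, lexeme, pos = next_token(text, pos)
        if name != "SKIP":
            out.append((name, lexeme))
    return out

print(tokenize("1 >> 2 == 3"))
# '>>' beats '>' and '==' beats '=' because they match more characters
```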

9. During the lexical analysis phase of compilation, which of the following is typically discarded?

  • A) Keywords
  • B) Identifiers
  • C) Operators
  • D) Whitespace and comments
Answer: D) Whitespace and comments

10. What does a lexer do with invalid tokens during lexical analysis?

  • A) It ignores them
  • B) It reports an error
  • C) It converts them to valid tokens
  • D) It stores them for later use
Answer: B) It reports an error