I need to write a parser for propositional logic. I intend to implement it by hand as a recursive descent parser in Java.
My question is about the lexer: is it really needed for this job? I mean, do I have to define a finite state machine to recognize tokens, etc.? I have seen some examples of simple parsers for arithmetic that handle everything in a "single parser", relying only on the grammar rules; they don't seem to bother with a separate, independent lexer that provides the tokens to the parser.
Since I want to do this in the most correct way, I'm asking for advice on this job. Any link to related info is welcome.
A bit more information would be useful, e.g. the grammar you want to use and an example input string. I don't know how much you know about Chomsky's hierarchy of grammars, but that's the key. Simply put, a lexer works at the word level (Type 3: regular grammars), while a parser can also analyse syntax (Type 2: context-free grammars). (More information here: lexers vs parsers.)
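For illustration, here is a minimal sketch of what the word-level part could look like in Java. The token set (variables, !, &, |, ->, parentheses) is just an assumption about your syntax, and the names Token, TokenType and Lexer are hypothetical:

    // Hypothetical token set for propositional logic; adjust to your syntax.
    enum TokenType { VAR, NOT, AND, OR, IMPLIES, LPAREN, RPAREN, EOF }

    class Token {
        final TokenType type;
        final String text;
        Token(TokenType type, String text) { this.type = type; this.text = text; }
    }

    class Lexer {
        private final String input;
        private int pos = 0;
        Lexer(String input) { this.input = input; }

        // Returns the next token, skipping whitespace; EOF at end of input.
        Token next() {
            while (pos < input.length() && Character.isWhitespace(input.charAt(pos))) pos++;
            if (pos >= input.length()) return new Token(TokenType.EOF, "");
            char c = input.charAt(pos);
            switch (c) {
                case '!': pos++; return new Token(TokenType.NOT, "!");
                case '&': pos++; return new Token(TokenType.AND, "&");
                case '|': pos++; return new Token(TokenType.OR, "|");
                case '(': pos++; return new Token(TokenType.LPAREN, "(");
                case ')': pos++; return new Token(TokenType.RPAREN, ")");
            }
            if (c == '-' && pos + 1 < input.length() && input.charAt(pos + 1) == '>') {
                pos += 2; return new Token(TokenType.IMPLIES, "->");
            }
            if (Character.isLetter(c)) {
                int start = pos;
                while (pos < input.length() && Character.isLetterOrDigit(input.charAt(pos))) pos++;
                return new Token(TokenType.VAR, input.substring(start, pos));
            }
            throw new IllegalArgumentException("Unexpected character '" + c + "' at " + pos);
        }
    }

Notice that this part is purely regular: it never needs to count or nest anything, which is exactly why it sits at Type 3.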
It's possible to use a scannerless parser, but I think you would end up integrating the lexer into your parser anyway if you wrote it without deliberately trying to avoid one. In other words, when you write your program, you can call the part that tokenizes the raw input string the lexer and the part that applies the grammar rules the parser, if that's how you want to name them. But you shouldn't worry too much about the terms: if you write a parser and think you don't need a lexer, chances are high that the lexer is already in your code, but who cares ;) Hope that helps, but feel free to ask if anything is still unclear!
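To make that concrete, here is a minimal recursive descent parser on top of the hypothetical Lexer sketched above. The grammar it encodes (right-associative implication, with precedence ! over & over | over ->) is an assumption, so swap in your own rules; it only validates the input, but each method is the natural place to build and return an AST node:

    // Assumed grammar, one method per rule:
    //   formula := impl
    //   impl    := or ('->' impl)?        (right-associative)
    //   or      := and ('|' and)*
    //   and     := unary ('&' unary)*
    //   unary   := '!' unary | '(' formula ')' | VAR
    class Parser {
        private final Lexer lexer;
        private Token current;

        Parser(String input) {
            this.lexer = new Lexer(input);
            this.current = lexer.next();   // one token of lookahead
        }

        // Parses a complete formula and requires the whole input is consumed.
        void parse() {
            impl();
            expect(TokenType.EOF);
        }

        private void impl() {
            or();
            if (current.type == TokenType.IMPLIES) { advance(); impl(); }
        }

        private void or() {
            and();
            while (current.type == TokenType.OR) { advance(); and(); }
        }

        private void and() {
            unary();
            while (current.type == TokenType.AND) { advance(); unary(); }
        }

        private void unary() {
            switch (current.type) {
                case NOT:    advance(); unary(); break;
                case LPAREN: advance(); impl(); expect(TokenType.RPAREN); break;
                case VAR:    advance(); break;
                default: throw new IllegalArgumentException("Unexpected token: " + current.text);
            }
        }

        private void advance() { current = lexer.next(); }

        private void expect(TokenType type) {
            if (current.type != type)
                throw new IllegalArgumentException("Expected " + type + " but got '" + current.text + "'");
            advance();
        }
    }

Usage would be e.g. new Parser("a & (b -> !c)").parse(); which returns normally for well-formed input and throws on malformed input. The call lexer.next() inside advance() is the whole lexer/parser interface, which is why it's so easy to inline the tokenizing code there and end up with a "single parser" without noticing.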