I've been experimenting with using AI to break down job descriptions into something closer to "actual expectations": day-to-day responsibilities, core skills vs. nice-to-haves, and implied seniority. This started because a lot of JDs feel long but still unclear, so I wanted to see if a model could structure them better.

**Approach:**

- Input: raw job description text
- Prompting: a mix of structured prompts (for consistency) and open-ended interpretation
- Output: summarized responsibilities, inferred skills, and role signals

**What worked:**

- Models are surprisingly good at identifying implied skills, even when they aren't explicitly stated
- Structured prompts improved consistency a lot

**Limitations:**

- Hallucination risk when the JD is too vague
- Sometimes overconfident interpretations
- Struggles with very jargon-heavy roles

One interesting thing: when comparing outputs across similar roles, the variation in how companies describe the same job is kind of wild.

I put together a simple demo to test this with different job descriptions: https://langa-clean--kealebogaxulu.replit.app/

Curious if others have tried something similar or have thoughts on making this more reliable.
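For anyone who wants to try the structured-prompt part, here's a minimal Python sketch of one way to do it. Everything here (the schema keys, function names, and the "reply with JSON only" instruction) is my own hypothetical setup, not the OP's actual prompts; the idea is just to pin the model to a fixed JSON schema so outputs stay comparable across job descriptions, and to validate the reply instead of trusting it:

```python
import json

# Hypothetical schema: the fields we want the model to extract from a JD.
SCHEMA_KEYS = ["responsibilities", "core_skills", "nice_to_haves", "implied_seniority"]

def build_prompt(jd_text: str) -> str:
    """Wrap a raw job description in a structured extraction prompt."""
    return (
        "Extract the following from this job description and reply with JSON only.\n"
        f"Required keys: {', '.join(SCHEMA_KEYS)}.\n"
        'If a field is not stated, infer it cautiously and mark it "(inferred)".\n\n'
        f"Job description:\n{jd_text}"
    )

def parse_response(raw: str) -> dict:
    """Validate the model's reply; fail loudly rather than trusting bad JSON."""
    data = json.loads(raw)  # raises if the model didn't return valid JSON
    missing = [k for k in SCHEMA_KEYS if k not in data]
    if missing:
        raise ValueError(f"model reply missing keys: {missing}")
    return data
```

You'd send `build_prompt(...)` to whatever model client you use and run the reply through `parse_response` before storing it; the validation step is also a cheap guard against the overconfident/hallucinated outputs mentioned above, since a malformed or incomplete reply gets rejected instead of silently accepted.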
Originally posted by u/Potential-Stop-1440 on r/ArtificialInteligence
