D602 – Deployment
Colleges: School of Technology
This dataset includes only posts filtered for negative sentiment. Counts reflect discussion volume (posts), not students, and do not measure satisfaction.
Recurring patterns
Unclear or ambiguous instructions and questions
Unclear or missing instructions and examples for the MLproject file and main.py, and for how to structure and run Task 2 (MLflow usage, subprocess vs. direct runs, the role of each file)
The main python file in the mlproject yaml file. I run the main.py but it doesn’t work. Can’t find the python scripts for import and clean and poly.
When you created the import and cleaning code for D602 Task 2, did you just write typical python code, or did you have to wrap it in some sort of mlflow code... I was just using subprocess.run, but I understand that may be incorrect. Whatever I'm doing right now feels very wrong as I'm getting some kind of run_uuid error. Yes, I've tried google, course materials, and FAQs... but I'm not finding them.
I don't know what main.py is supposed to do. I'm not really sure what an MLproject file is doing or what I need to write for either of these And then there's the fact that I have no idea what main.py is supposed to do (call the other 3 files I guess, but how exactly I don't know) I went back and watched the MLFlow tutorial stuff on the resources page and I feel equally as lost as when I started
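The confusion above centers on how main.py, the MLproject file, and the three pipeline scripts fit together. A minimal sketch of one common arrangement, in which main.py runs each script as a subprocess inside a single MLflow run; the script names (import.py, clean.py, poly.py) are taken from the posts and are assumptions, not the official task structure:

```python
# Hypothetical main.py sketch: run each pipeline script as a subprocess
# inside one MLflow run. Script names below are assumed from the posts.
import subprocess
import sys

STEPS = ["import.py", "clean.py", "poly.py"]  # assumed script names

def build_command(script: str) -> list:
    # Use the interpreter that launched main.py so each step runs in
    # the same environment MLflow set up.
    return [sys.executable, script]

def run_pipeline() -> None:
    import mlflow  # imported lazily; requires mlflow to be installed
    with mlflow.start_run():  # one run wraps the whole pipeline
        for script in STEPS:
            # check=True makes a failing step stop the pipeline loudly
            subprocess.run(build_command(script), check=True)
```

The matching MLproject file would then declare main.py as the entry point, so `mlflow run .` invokes this orchestrator rather than the individual scripts.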
Missing required templates or assessment resources
The provided Task 2 starter script contains an MLflow run-handling bug (multiple start_run calls) that raises an MlflowException and breaks the supplied code
Issue with provided script in task 2 ... It seems to be an issue with the multiple start runs that are in their script. I have also tried the tshooting steps they provide in the FAQ to no avail. ... mlflow.exceptions.MlflowException: Cannot start run with ID 845721ef3e2a4765a3e9fd4502ed51a6 because active run ID does not match environment run ID. Make sure --experiment-name or --experiment-id matches experiment set with set_experiment(), or just use command-line arguments
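That exception usually means two run contexts are competing: `mlflow run` exports the MLFLOW_RUN_ID environment variable for the run it creates, and a second bare `start_run()` in the script (or one pointed at a different experiment) conflicts with it. A hedged sketch of one common workaround, attaching to the already-active run instead of starting a new one:

```python
# Sketch of one common fix, not the official starter-script patch:
# reuse the run that `mlflow run` already created rather than calling
# mlflow.start_run() a second time.
import os

def start_or_reuse_run(mlflow_module):
    """Attach to the run created by `mlflow run` if one exists,
    otherwise start a fresh run."""
    run_id = os.environ.get("MLFLOW_RUN_ID")
    if run_id is not None:
        # Reuse the environment's run instead of starting a duplicate,
        # which is what triggers the "active run ID does not match
        # environment run ID" MlflowException.
        return mlflow_module.start_run(run_id=run_id)
    return mlflow_module.start_run()
```

In real use this would be called as `with start_or_reuse_run(mlflow):` at the top of the script, replacing the extra `start_run()` calls.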
Tooling/environment misconfiguration or missing guidance
No guidance on acceptable evidence or troubleshooting steps when the MLflow UI shows a successful run but the command-line process fails, leaving students unsure what to submit
MLFlow looks successful in UI but fails in CMD. ... still never made any progress and was getting the same error. I thought to check the MLFlow UI and it looks like one of my attempts worked. Im thinking of just submitted proof from the UI. I also get model metrics from the UI. Does this mean it worked?
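When the UI and the command line disagree, the run status and metrics the UI displays can be read straight from the tracking store, which gives reproducible command-line evidence of the same result. A sketch using MLflow's MlflowClient; the experiment name and the shape of the summary dict are assumptions for illustration:

```python
# Sketch: fetch the same run status/metrics the MLflow UI shows, via
# the tracking API, so CLI output can corroborate the UI.
def summarize_run(run_info: dict) -> str:
    # run_info is a plain dict (run_id, status, metrics), e.g. built
    # from client.get_run(...) in real use.
    metrics = ", ".join(f"{k}={v}" for k, v in sorted(run_info["metrics"].items()))
    return f"run {run_info['run_id']}: {run_info['status']} ({metrics})"

def fetch_latest_run(experiment_name: str) -> dict:
    from mlflow.tracking import MlflowClient  # lazy import; needs mlflow
    client = MlflowClient()
    exp = client.get_experiment_by_name(experiment_name)
    run = client.search_runs([exp.experiment_id], max_results=1)[0]
    return {
        "run_id": run.info.run_id,
        "status": run.info.status,        # e.g. "FINISHED" or "FAILED"
        "metrics": dict(run.data.metrics),
    }
```

A "FINISHED" status with logged metrics retrieved this way is the same record the UI renders, which may help students decide whether a run actually succeeded.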
Note: post_excerpts.json was not available at build time, so some Open post links may fall back to a generic Reddit URL.