cleaned up TODOs, still need to deal with jobs_active, jobs_completed...

2021-01-20 23:21:59 +11:00
parent 5b99855cb5
commit 153be75302

TODO

@@ -1,8 +1,5 @@
DDP:
Should use a session per Thread, so maybe
sess={}
sess[job.id]=Session()
etc
Need to use thread-safe sessions per Thread; the half-assed version did not work
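One way to do that, assuming SQLAlchemy (scoped_session gives one session per thread; the engine URL and the run_job helper are illustrative, not the current code):

    from sqlalchemy import create_engine
    from sqlalchemy.orm import scoped_session, sessionmaker

    engine = create_engine("sqlite:///pa.db")   # placeholder URL, not the real DB
    SessionFactory = sessionmaker(bind=engine)
    Session = scoped_session(SessionFactory)    # thread-local: each thread gets its own session

    def run_job(job):
        sess = Session()                        # this thread's session
        try:
            # ... the job's DB work goes here ...
            sess.commit()
        except Exception:
            sess.rollback()
            raise
        finally:
            Session.remove()                    # discard this thread's session when the job ends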
### DB
@@ -28,31 +25,6 @@ DDP:
id to link to AI_scan
refimg used/found
NewJob should occur per path (or potentially all paths in import_dir), then you know #files for new non-scan jobs
if we make jobs minimal, then we can ditch passes and just use wait_for...
Jobs should be:
scan for files in DIR -> then we know num_files in DIR
get thumbs for files (in DIR)
TODO: The two lines above are in GenerateFileData AND work on all import_dir paths at once; need to split this up (so our current setup would be 5 jobs (1 fail) on borric):
Job-1: Scan images_to_process -> success (num_files_1)
Job-2: Scan C: -> fail (report back to web)
Job-3: scan new_image_dir -> success (num_files_2)
Job-4 (wait on 1): Gen thumbs images_to_process (on num_files_1)
Job-5 (wait on 3): Gen thumbs new_image_dir (on num_files_2)
(worst case: if a job waited on Job-2 and Job-2 failed, then auto-fail it.)
process AI (<1 person>) for files (in DIR), e.g.
Job-7: scan 'cam' in images_to_process (num_files_1)
Job-8 (wait for 7): scan 'cam' in new_image_dir (num_files_2)
Job-9: scan 'dad' in images_to_process (num_files_1)
Job-10 (wait for 9): scan 'dad' in new_image_dir (num_files_2)
etc.
this way we ditch passes (rough dependency sketch below)
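Rough sketch of that dependency idea (Job, wait_for, run_all and the state strings are illustrative, not the real pa_job_manager API):

    from dataclasses import dataclass, field

    @dataclass
    class Job:
        id: int
        desc: str
        wait_for: list = field(default_factory=list)   # job ids this one depends on
        state: str = "Pending"                          # Pending / Completed / Failed

    def do_work(job):
        pass    # placeholder for scan / gen thumbs / AI scan

    def run_all(jobs):
        by_id = {j.id: j for j in jobs}
        for j in jobs:                                  # assumes ids are listed in dependency order
            if any(by_id[d].state == "Failed" for d in j.wait_for):
                j.state = "Failed"                      # auto-fail when a dependency failed
                continue
            try:
                do_work(j)
                j.state = "Completed"
            except Exception:
                j.state = "Failed"

    run_all([
        Job(1, "Scan images_to_process"),
        Job(2, "Scan C:"),                              # would fail and report back to web
        Job(3, "Scan new_image_dir"),
        Job(4, "Gen thumbs images_to_process", wait_for=[1]),
        Job(5, "Gen thumbs new_image_dir", wait_for=[3]),
    ])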
num jobs active, num jobs completed: let's bin them from the pa_job_manager table -> calculate them every time (simple select count(1) from job where pa_job_state = 'Completed')
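e.g. with SQLAlchemy that count could look like this (assumes a Job model mapped to the job table with a pa_job_state column; names are guesses):

    from sqlalchemy import func

    def count_jobs(sess, state):
        # select count(1) from job where pa_job_state = :state
        return sess.query(func.count(1)).filter(Job.pa_job_state == state).scalar()

    # jobs_completed = count_jobs(sess, "Completed")
    # jobs_active    = count_jobs(sess, "Active")    # assuming an "Active" state exists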
FE does not really care what 'state' the job engine is in anyway, so maybe we bin that table and make it a local class inside pa_job_manager?
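i.e. the engine's own state could just live in memory, roughly (illustrative only):

    class JobManagerState:
        # local, in-memory replacement for the pa_job_manager table;
        # active/completed counts are derived from the job table on demand
        def __init__(self):
            self.state = "Idle"     # e.g. Idle / Running, whatever the engine needs internally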