Running tasks with varying "core" requirements in same batch job #1324
Comments
Cross ref #1326
@annawoodard does the WorkQueueExecutor have the same basic functionality as the htex? That is, is wqex a superset of htex or, if not, what functionality will be given up?
Regarding Work Queue: each task can specify the number of cores it requires. However, the Work Queue executor does not currently pass a per-task core count through, though it should not be a hard feature to implement. That said, how will the Parsl app specify the number of cores it needs to run the task when it is submitted to the WQExecutor?
One possibility is to add it as a keyword argument to the decorator, for example:

```python
@python_app(cores=2)
def foo():
    return 'Hello, world!'
```

This would be defined here and here for the python and bash app decorators, which would pass it along to the app here. Then at call time before this we could just add it into the kwargs that will be serialized along with the function and passed to the
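For concreteness, here is a minimal, hypothetical sketch of how such a keyword could be captured by the decorator and exposed for an executor to read at submit time. The attribute name `resource_specification` and the decorator body are my assumptions, not Parsl's actual internals:

```python
import functools

def python_app(cores=1):
    """Hypothetical decorator sketch (not Parsl's real implementation):
    capture a per-app core count and attach it to the wrapped function."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        # Executor-side code could inspect this attribute before
        # serializing the task for dispatch.
        wrapper.resource_specification = {'cores': cores}
        return wrapper
    return decorator

@python_app(cores=2)
def foo():
    return 'Hello, world!'
```

Here the resource requirement travels with the function object itself, so whatever code serializes the task can pick it up without a separate lookup.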
As a reminder, we have generally tried to avoid mixing resource info and program/app info. I don't know if there's any way to avoid doing so in the context of this issue, however.
btovar
commented
Oct 3, 2019
If so, I would recommend:

```python
@python_app(resources={'cores': 2})
def etc():
    ...
```

as the list of resources may get long, and you may not want to have a super long list of keyword attributes.
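A minimal sketch of that dict-based style; the decorator name, the default-merging behavior, and the `memory_mb` key are my assumptions for illustration, not Parsl's API:

```python
def python_app(resources=None):
    """Hypothetical decorator sketch (not Parsl's real API): accept one
    `resources` dict rather than many separate keyword arguments."""
    # Merge caller-supplied resources over defaults, so new resource
    # kinds can be added without changing the decorator signature.
    resources = {'cores': 1, **(resources or {})}
    def decorator(func):
        func.resources = resources  # an executor could read this at submit time
        return func
    return decorator

@python_app(resources={'cores': 2, 'memory_mb': 4096})
def simulate():
    return 'done'
```

The advantage of the dict is exactly the one raised above: adding, say, disk or GPU requirements later does not grow the decorator's signature.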
@danielskatz I don't think my proposal above is in conflict with our 'write once, run anywhere' aspiration. If you know your task has fixed resource requirements, then I don't see the problem with saying so in the code; that's not going to change. The thing that does change is where you are running it, and that is still nicely factored out in the config.
btovar
commented
Oct 3, 2019
I think one can make the case that 'resources' are really closer to describing the app (kind of like an argument to malloc), rather than the 'resource' where the app will run.
ok, ok ...
@btovar I slightly favor keyword args because, in my view, it's a bit more natural to document the options, their types, and their defaults in the docstring, which can be accessed in the interpreter.
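To illustrate the point about discoverability, here is a hypothetical sketch (the decorator name and parameter list are assumptions, and the body is deliberately elided):

```python
def python_app(function=None, cores=1):
    """Decorator for Python apps.

    Parameters
    ----------
    function : callable, optional
        The function to decorate.
    cores : int, optional
        Number of cores the app requests from its executor (default 1).
    """
    # Body elided; this sketch only illustrates docstring-based documentation.
    raise NotImplementedError

# In the interpreter, `help(python_app)` prints the signature and the
# docstring above, so the options are self-documenting.
```

With a bare `resources` dict, by contrast, the valid keys and their defaults would have to be documented somewhere else.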
btovar
commented
Oct 3, 2019
Ah yes, that makes sense.
@TomGlanzman The main differences that come to mind are: 1) at the moment WQ is not pip-installable, so you would need to install it as a separate step (but my understanding is that it will be very soon), and 2) wqex was added recently, so while WQ itself is mature and robust software, there may be a few kinks to iron out with the executor; it is so fresh that it hasn't been extensively tested 'in the wild' yet.
Thanks @annawoodard. Is the suggestion that I attempt to migrate to the wqex at some point, or that some of its functionality will be incorporated into the htex? (It is not clear to me how wqex might be used at NERSC.)
TomGlanzman commented Oct 2, 2019
It would be beneficial to run tasks with different core requirements (cores_per_worker) within a single batch job. The motivation is to run a heterogeneous set of tasks (tasks with differing core requirements) on the same compute node at NERSC. I have a workflow that generates many tasks (identical code, different data) to run under the same (htex) executor, so that the tasks run in the same batch job. Each task, in general, needs a different number of cores. The Cori machine has batch "Haswell" nodes with either 32 cores (and 64 hw threads) or 68 cores (and 272 hw threads). To efficiently utilize a node, one must be able to keep as many cores busy as possible.
This request is to support the ability for the user to specify the number of needed cores at task creation time and to have the appropriate bookkeeping performed to avoid oversubscribing a node.
For "SimpleLauncher", this would mean Parsl would have to handle the bookkeeping (i.e., cores available vs. in use). For "SrunLauncher", srun would presumably do the bookkeeping (potentially across multiple nodes).
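For the SimpleLauncher case, the bookkeeping amounts to tracking free cores per node and only dispatching a task when it fits. A toy, hypothetical sketch (not Parsl code) of that accounting:

```python
class NodeCoreTracker:
    """Toy bookkeeping for cores on a single compute node (not Parsl code)."""

    def __init__(self, total_cores):
        self.total_cores = total_cores
        self.in_use = 0

    def try_acquire(self, cores):
        """Reserve `cores` if available; return True on success."""
        if self.in_use + cores <= self.total_cores:
            self.in_use += cores
            return True
        return False  # would oversubscribe; the task must wait

    def release(self, cores):
        """Return cores to the pool when a task finishes."""
        self.in_use -= cores

# Example: a 32-core Haswell node running a mixed set of tasks.
node = NodeCoreTracker(32)
assert node.try_acquire(8)       # 8/32 cores in use
assert node.try_acquire(24)      # 32/32 cores in use
assert not node.try_acquire(1)   # would oversubscribe; deferred
node.release(8)                  # a task finishes
assert node.try_acquire(4)       # 28/32 cores in use
```

A real implementation would also need to decide scheduling policy (e.g., whether a small waiting task may jump ahead of a large one), which is where most of the complexity lies.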