Message boards : CMS Application : Longer jobs
Joined: 20 Jan 15 | Posts: 1139 | Credit: 8,173,156 | RAC: 1,778
To ease my workload over the holiday season, I've gone back to the longer jobs we used to have -- 4,000 events per job rather than 2,500. (We'd reduced the event count while we checked whether we could run many more jobs per batch than in our original operation.) So each job will take correspondingly longer, but tasks should stay roughly the same length; as I understand it, a task finishes when the total run-time of its completed jobs exceeds 12 hours, with a hard cut-off after 18 hours per task. I don't think this should materially affect your running of tasks, unless you are in the habit of suspending computation frequently, in which case you might find that jobs are abandoned because the VM has been suspended too often.
Joined: 16 Aug 15 | Posts: 966 | Credit: 1,211,816 | RAC: 0
Are you sure it has not reverted back to 2,500 events per job? My last 3 tasks (roughly those started on 22 Dec 00:00 UTC or later) indicate that it has.
Joined: 8 Apr 15 | Posts: 779 | Credit: 12,146,916 | RAC: 2,432
I just wish I could get more than 2 multi-core tasks on my 8-core PCs. Instead of just 2 of the 2-core tasks, I would rather be able to get 4 of them, so I didn't have to fetch other work for the other 4 cores. Prefs are set at no limit.

Mad Scientist For Life
Joined: 22 Apr 16 | Posts: 677 | Credit: 2,002,766 | RAC: 689
Hi Magic, my experience with ATLAS is that when you set the preferences to, for example, 8 tasks, it takes a while for BOINC to download more.
Joined: 8 Apr 15 | Posts: 779 | Credit: 12,146,916 | RAC: 2,432
> Hi Magic, my experience with ATLAS is that when you set the preferences to, for example, 8 tasks, it takes a while for BOINC to download more.

I have had my prefs set the same since multi-core started and still get no more than 2 of the 2-core tasks here. Do you get more than that on an 8-core PC?
Joined: 22 Apr 16 | Posts: 677 | Credit: 2,002,766 | RAC: 689
> Do you get more than that on an 8-core PC?

BOINC filled up with 8 tasks (counting both the running tasks and the not-yet-uploaded ones) when the prefs were set to 8 tasks. The number of CPUs each task uses is ignored. But when tasks carry a lot of work (running 24 hours, for example), the number of downloaded tasks was reduced. It's a very dynamic process.
Joined: 20 Jan 15 | Posts: 1139 | Credit: 8,173,156 | RAC: 1,778
> Are you sure it has not reverted back to 2,500 events per job?

Yes, I accidentally typed the wrong command ("source command" instead of "source command2") and submitted a batch to the old WMAgent. I've renamed command to command.old now, so that won't happen again. :-)
Joined: 8 Apr 15 | Posts: 779 | Credit: 12,146,916 | RAC: 2,432
> Do you get more than that on an 8-core PC?

What I mean is: can anyone get 4 of the 2-core multi-core tasks at the same time and have them running on an 8-core computer? I can only get two of them, which of course uses 4 cores. I want to get 4 of the 2-core tasks so I can run all 4 multi-core tasks at the same time, which would use all 8 cores. Since I can only get two of the multi-core tasks from the server, I have to get 4 single-core tasks over at LHC just to keep all 8 cores running.

Mad Scientist For Life
Joined: 13 Feb 15 | Posts: 1188 | Credit: 859,751 | RAC: 30
> What I mean is: can anyone get 4 of the 2-core multi-core tasks at the same time and have them running on an 8-core computer?

I only get 2 CMS tasks, no matter whether I set Max # jobs 4 and Max # CPUs 1, or Max # jobs 4 and Max # CPUs 2. But when I set Max # jobs 4 and Max # CPUs 4, I got 4 tasks. ??? It seems something is reversed. So you can get 4 tasks that way, and then use an app_config.xml to run the 4 tasks dual-core (a sketch follows below).
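For anyone who wants to try that, here is a minimal app_config.xml sketch that would allow 4 CMS tasks at once and treat each as a 2-core task. The app name (CMS) and the VirtualBox plan class shown are assumptions based on typical LHC@home setups, not confirmed in this thread; check the <app_name> and <plan_class> entries in your client_state.xml for the exact strings your host uses. The file goes in the project's folder under your BOINC data directory, and BOINC picks it up via Options → Read config files.

```xml
<app_config>
  <!-- Sketch only: verify the real app name and plan class in client_state.xml -->
  <app>
    <name>CMS</name>                      <!-- assumed application name -->
    <max_concurrent>4</max_concurrent>    <!-- allow 4 CMS tasks to run at once -->
  </app>
  <app_version>
    <app_name>CMS</app_name>
    <plan_class>vbox64_mt_mcore_cms</plan_class>  <!-- assumed multi-core vbox plan class -->
    <avg_ncpus>2</avg_ncpus>              <!-- schedule each task as a 2-core task -->
  </app_version>
</app_config>
```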
Joined: 20 Jan 15 | Posts: 1139 | Credit: 8,173,156 | RAC: 1,778
After your messages yesterday I realised that my cruncher at work was doing 2x2-core tasks while I had its preferences set to 3x2. I changed it to 5x2 but it's still only running 2x2. I'll play around with it a bit later; I just got it to start two LHC@home tasks now that we have tasks in the queue there again, so I'm loth to interfere with it for a while.
Joined: 20 Jan 15 | Posts: 1139 | Credit: 8,173,156 | RAC: 1,778
Right, I just suspended SETI@Home on my 20-core server, leaving 2x 2-core -dev tasks running, plus 2x (1-core, of course) LHC@home/CMS tasks as well. When I requested more -dev tasks the response was "not needed". Any thoughts? I think there's an option somewhere to make BOINC spell out its decisions more explicitly; I'll look into that later (now's not really the time to be doing such research...).
Joined: 13 Feb 15 | Posts: 1188 | Credit: 859,751 | RAC: 30
> Any thoughts? I think there's an option somewhere to make BOINC spell out its decisions more explicitly.

With <work_fetch_debug>1</work_fetch_debug> in the log_flags section of cc_config.xml you get more info about what is requested. After editing and saving cc_config.xml, just select 'Read config files' from the Options menu in BOINC Manager.
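For reference, a complete minimal cc_config.xml with that flag set looks like the sketch below; work_fetch_debug is a standard BOINC log flag, and the file lives at the top of the BOINC data directory:

```xml
<cc_config>
  <log_flags>
    <!-- Log each project's work-fetch decisions in the Event Log -->
    <work_fetch_debug>1</work_fetch_debug>
  </log_flags>
</cc_config>
```

Once it's read in, the Event Log shows, per project, how much work BOINC thinks it needs and why it did or did not issue a request.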
Joined: 20 Jan 15 | Posts: 1139 | Credit: 8,173,156 | RAC: 1,778
Cheers, CP, I'll look into that later. Meanwhile, I overslept and let the batch queue run out of jobs, and/or the WMAgent failed. The only WMAgent expert on my list who isn't currently on holiday is at Fermilab, so it may be a few hours before he's able to respond to my alert.
Joined: 20 Jan 15 | Posts: 1139 | Credit: 8,173,156 | RAC: 1,778
OK, no response yet from anyone with the power to restart the WMAgent, so the usual advice applies: set No New Tasks.
Joined: 20 Jan 15 | Posts: 1139 | Credit: 8,173,156 | RAC: 1,778
Starting to get some response now, but the time-zones are still working against us. :-(
Joined: 20 Jan 15 | Posts: 1139 | Credit: 8,173,156 | RAC: 1,778
OK, we seem to have jobs again. For some reason, I'm totally fagged out. I'll be hitting the hay soon, but maybe surfacing again for the third day of the Melbourne Test later tonight! :-)