Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - stoneageman

Pages: [1] 2 3
Hardware info, tips & tricks / Re: Black and White Ryzen build
« on: 11 June, 2017, 05:00:49 PM »
I have some time to kill this morning. I was able to hit 3.9GHz and 4GHz for Cinebench runs. No stability testing here, as I do not intend to crunch at these clock speeds.
vcore 1.35
DRAM 3200MHz 14-14-14-34
Cinebench: 1735

vcore 1.38
DRAM 3200MHz 14-14-14-34
Cinebench: 1786
The following users thanked this post: stoneageman

Goofyx / Re: Project site link
« on: 18 May, 2017, 08:54:35 PM »
Update: Project has moved to
The following users thanked this post: stoneageman

Linux / corefreq-cpu-monitoring
« on: 07 April, 2017, 09:44:19 PM »

Thought this might interest some of you
The following users thanked this post: stoneageman

GIMPS / Re: Linux GIMPS trial factoring on Pascal Nvidia GPU card
« on: 06 April, 2017, 01:01:19 AM »
I found a script that gets work and sends results back for the mfaktc program. It works great.

Go to the following link to get it. A few lines down on that page there is a "Raw" tab on the right; click it to get the script in a cleaner format. Copy and paste the script into a text file and name it

Put it in the same directory as the mfaktc program.

To give it permission to run, I did:

   cd mfaktc-0.21
   chmod u+x

An example of how to run it:

  ./ -u username -p password -n 10

      username is the user name you created on that site
      password is your password
      -n 10 tells the program to keep a cache of 10 work units; if you leave it off, it just gets 1.

By default, the program sends results and gets new work every hour. There is a -t option to change this.

There are other options; you can see them in the script around line 340.
The following users thanked this post: stoneageman

Linux / WD PiDrive Node Zero
« on: 27 March, 2017, 01:33:40 AM »
"The WD PiDrive Node Zero is a compact, all-in-one unit that includes a WD PiDrive connected to a Raspberry Pi Zero through a custom adapter board with 2 USB ports. This unit offers an affordable, low-power storage node with on-board compute capabilities. Ideal for video recording, data logging, offline analytics, and applications where stand-alone operation are needed because of network limitations or privacy/security restrictions."

comes with:
WD PiDrive 314GB
Raspberry Pi Zero
USB Adapter board
microSD card (with preloaded software)
mini HDMI adapter cable


Create Your Own Raspberry Pi Home Network Music System

The following users thanked this post: stoneageman

Hardware info, tips & tricks / Re: Run your XEONS at full speed
« on: 16 March, 2017, 11:42:02 PM »
This is what I did:
1. Download the latest BIOS.
2. Extract the zip file posted by Ace.
3. Extract the latest BIOS.
4. Copy the "X10DAL6_910\UEFI\X10DAL6.910" BIOS file to the "UBU\UEFI BIOS Updater" folder.
5. Right-click on UBU.bat and run as administrator.
* You should see the command window display your motherboard and BIOS file correctly.
6. Follow the instructions on how to remove the microcode and save the modded BIOS.
7. Copy the contents of X10DAL6_910\UEFI to a FAT32 USB drive.
8. Rename the original unmodified BIOS to a different name.
9. Copy the modded BIOS to the FAT32 USB drive.
10. Rename the modified BIOS to the original BIOS name.
11. Go into your BIOS and execute the UEFI shell manager.
12. Press any key when prompted.
13. From the messages, locate which drive your USB is mounted as; mine is fs1.
14. Type "fs1:" at the command line.
15. Use the Supermicro flash utility, "flash X10DAL6.910", to flash the modded BIOS.
16. When it finishes, restart with a power cycle.
17. Go to the BIOS CPU info; your microcode should now show N/A.
18. Disable C-states under your BIOS CPU power management.
19. Open up the UEFI shell manager again.
20. Type "fs0:" (fs0 is my boot drive).
21. Copy "v3x2_mc39.efi" or "v3x2.efi" to fs0:\efi\boot\ by typing "copy fs1:\v3x2.efi fs0:\efi\boot\" or "copy fs1:\v3x2_mc39.efi fs0:\efi\boot\".
* This makes a permanent copy of your EFI driver, in case you remove your USB.
* This is where we should add the EFI driver to the boot sequence, but the Supermicro shell doesn't have bcfg.
22. Go to the "efi\boot" folder and type "load v3x2_mc39.efi" or "load v3x2.efi".
23. You should see a message indicating your CPU is set to full turbo, i.e. 2.7GHz.
24. Exit back into the BIOS by typing "exit", and load UEFI: Windows Boot Manager.
25. After Windows has started, go to windows\system32\ and look for the file "mcupdate_GenuineIntel.dll"; rename or delete it.
26. Restart, and repeat steps 19, 20, 22, and 24.
27. Go to "00_OC ANANDTECH\05 MICROCODE\0x306F2_27-39" and copy "0x39.dat" to "00_OC ANANDTECH\06 VMWARE MICROCODE UPDATER\cpumcupdate2.1".
28. Rename "0x39.dat" to "microcode.dat".
29. Right-click on "install.bat" and run as administrator.
30. Your CPU should now be able to run at full speed.
The following users thanked this post: stoneageman

Hardware info, tips & tricks / Ryzen 7 1700 PPW comparison
« on: 16 March, 2017, 01:55:34 AM »
OK, I've had the Ryzen 7 1700 running on Windows 10 Pro for only about 3 days.
I'll do a quick PPW comparison between the 1700 and my dual E5-2695 ES V4.
While these numbers may not be 100% correct, I think they should be close.
I'd need to run for a month or so to get really accurate numbers.
Anyway, I'll pick the points for today 3/15/2017 on both machines:

1700 (16 threads) @ 3.7ghz = 165W
WCG points = 71,655
PPW = 71,655/165 = 434 wcg ppw or 62 boinc ppw

Dual E5-2695 ES V4 (56 threads) @ 2.8ghz = 305W
WCG points = 235,038
PPW = 235,038/305 = 770 wcg ppw or 110 boinc ppw
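The arithmetic above can be reproduced in a few lines; note the divide-by-7 WCG-to-BOINC conversion is inferred from the figures in this post, so treat it as an assumption:

```python
# Points-per-watt sketch; values truncated as in the post.
def ppw(wcg_points, watts):
    wcg_ppw = wcg_points / watts
    boinc_ppw = wcg_ppw / 7      # WCG points ~= 7x BOINC credit (assumed ratio)
    return int(wcg_ppw), int(boinc_ppw)

print(ppw(71_655, 165))      # Ryzen 7 1700 @ 3.7GHz -> (434, 62)
print(ppw(235_038, 305))     # dual E5-2695 ES V4 @ 2.8GHz -> (770, 110)
```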

Note: both systems are running FAAH, but the 2P is running Linux Mint and the 1700 is running Windows 10.
I'll finish up the cached WUs, then I'll install Ubuntu and compare again... Linux to Linux.
If anyone is running a single Xeon V4, please post your numbers to compare with the Ryzen.
Ryzen 7 1700X and Ryzen 7 1800X owners are also welcome.

The following users thanked this post: stoneageman

Crunchers café / Re: Made me Laugh
« on: 12 March, 2017, 05:37:24 PM »
Woman overwhelmed by a lot of peckers.
The following users thanked this post: stoneageman

Einstein@home / Alright!
« on: 23 January, 2017, 10:45:41 PM »
I'm in!!!!
The following users thanked this post: stoneageman

Crunchers café / Re: Made me Laugh
« on: 13 January, 2017, 05:00:07 PM »
Ewe wanna guess what's for dinner?
The following users thanked this post: stoneageman

Crunchers café / Re: Geek Humor
« on: 07 January, 2017, 09:38:42 PM »
The following users thanked this post: stoneageman

World community grid / Re: The teams daily numbers
« on: 28 December, 2016, 04:28:14 PM »
Spiffy layout SaM   ::)
The following users thanked this post: stoneageman

World community grid / Re: 12th WCG Birthday Challenge
« on: 20 November, 2016, 09:16:58 PM »
One observation you can experiment with (it will be in my write-up): work out the average time per WU on the rig (pull this from your WCG device statistics or, if you are already bunkering, from BoincTasks), then work out how many WUs you need for all cores for the 10 days, divide that by 1100, and round up to give the number of "machines" needed.

Now try to divide that number into the cores you have, i.e. for 72 cores: 36 cores ×2, 18 cores ×4, 9 cores ×8, 6 cores ×12, etc.

Build that many "machines".
Go to cc_config.xml, give each one that number of cores in <ncpus>, and run them until they have done enough work to qualify, i.e. to get more than just a couple of spare WUs.
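For reference, the <ncpus> setting lives in the <options> section of cc_config.xml in each BOINC instance's data directory; a minimal fragment pinning one "machine" to 6 threads looks like:

```xml
<cc_config>
  <options>
    <ncpus>6</ncpus>
  </options>
</cc_config>
```

The client re-reads cc_config.xml on restart, or via "Read config files" in the BOINC manager.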

On day one of the 10 days preceding the final week, adjust the cache days (and, if necessary, <ncpus>) again for a while to get the required number of WUs loaded into each machine.

Stop network comms and now RUN ALL CONCURRENTLY!

Each machine is now set to run at 100%, but on fewer cores, at the same time, not sharing cores (no lost cycles), so the WUs will last longer... in fact for the whole 10 days. You can just walk away, knowing that on dump day you simply cycle through each one, turn on network comms, and adjust preferences / set "no new work" as suits you.

I shall try an example:

Average time per WU = 1.3 hours

1100 downloaded WUs = 1430 hours of work; divided by 72 threads = 19.86 hours

10 days of cache in hours = 240

240/19.86 = 12.08

Therefore, if you start 12 machines, each using 6 threads to complete the 1100 WUs it has downloaded, say an hour after midnight, it takes you all the way.
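The example above works out to the same numbers in a quick sketch:

```python
# Bunker sizing sketch for the worked example above.
avg_wu_hours = 1.3
downloaded_wus = 1100        # per-client download limit assumed in this post
threads = 72
cache_hours = 10 * 24        # 10 days of cache = 240 hours

work_hours = downloaded_wus * avg_wu_hours    # 1430.0 hours of work
burn_hours = work_hours / threads             # ~19.86 hours at full load
machines = cache_hours / burn_hours           # ~12.08 "machines" needed
threads_each = threads // round(machines)     # 6 threads per machine

print(round(burn_hours, 2), round(machines, 2), threads_each)  # 19.86 12.08 6
```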

You can balance the total WUs downloaded by running the primary machine for an hour or two on full cores before suspending it in favour of the new machines.

Why go to this trouble?

1. These new instances are pre-qualified; they will just load new work.
2. Once set, you can walk away for the whole of the deadline period.
3. Using cc_config.xml makes each new "machine" run concurrently with an equal share of resources, obviating the need to babysit a number of changeovers.
4. This can be an issue if you do things serially:
11528   World Community Grid   20-11-2016 10:15   Scheduler request completed: got 0 new tasks
11529   World Community Grid   20-11-2016 10:15   No tasks sent
11530   World Community Grid   20-11-2016 10:15   No tasks are available for the applications you have selected.
11531   World Community Grid   20-11-2016 10:15   Tasks are committed to other platforms

If you set them all up in BoincTasks, dumping is just a couple of clicks.

EDIT: here are two sharing... 2 machines with half the cores each, and good CPU %.

2nd EDIT:

Thinking on this some more, and testing this morning: if you load and run the primary install for an hour or two at the beginning and then pause it, you can turn it on toward the end if you find yourself short of WUs on any machine. It is set to use 100% of the resources allocated, so you just allocate the number of threads needed using <ncpus>.

Bearing this in mind, it ought to be possible to run odd numbers of machines if you can calculate the thread resources for each.
The following users thanked this post: stoneageman

World community grid / Re: 12th WCG Birthday Challenge
« on: 20 November, 2016, 08:13:47 PM »
I have a week to prepare a practice run, then I am going to test a full assault on the final Thor week.

For me that equates to 20 rigs, 308 cores/threads just now, so I will be looking for as close as possible to a 3080-day bunker followed by 2156 days of running... target = 5236 days.
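Those day totals follow directly from the thread count:

```python
# 308 threads bunkering for 10 days, then running the 7-day final week.
threads = 308
bunker_days = threads * 10
final_week_days = threads * 7
print(bunker_days, final_week_days, bunker_days + final_week_days)  # 3080 2156 5236
```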

Hopefully at the end of that I will have a clear idea of the best approach for maximising this and can post my findings.

I have been playing around with multiple instances and have it figured out on Windows (I think).

I am going to try something similar and attempt to bunker a couple thousand days for the Thor final week.
The following users thanked this post: stoneageman

World community grid / Re: Are we up for the Thor Challenge ?
« on: 06 November, 2016, 05:50:55 PM »
I've been away from XS, but I just joined up on the new team forum and added 48 threads; that should help a little.
The following users thanked this post: stoneageman

Pages: [1] 2 3