
NOD32 AntiVirus 12 patch Archives

Re: [AMBER] experiences with EVGA GTX TITAN Superclocked - memtestG80 - UNDERclocking in Linux ?
Date: Wed, 19 Jun 2013 10:03:18 -0700
Hey Jonathan,
Thanks for the 780 numbers! The problem really does seem Titan-specific.
I'd like to get a few more repros of your work before I conclude that, as a
sample size of 1 is intriguing but not conclusive.
On Wed, Jun 19, 2013 at 9:34 AM, Jonathan Gough
<jonathan.d.gough.gmail.com>wrote:
> FWIW I posted GTX 780 results
>
> here http://archive.ambermd.org/201306/0207.html
>
> and here
>
> http://archive.ambermd.org/201306/0211.html
>
>
> If you would like me to test anything else, let me know.
>
> Would Nvidia be willing to trade me a GTX 780 for my Titan?
>
>
>
> On Wed, Jun 19, 2013 at 11:50 AM, Scott Le Grand <varelse2005.gmail.com
> >wrote:
>
> > Hey Marek,
> > No updates per se. I had a theory about what was going on that proved to
> > be wrong after testing, but I'm still waiting on NVIDIA to report
> something
> > beyond having reproed the problem.
> >
> > Really really really interested in GTX 780 data right now...
> >
> >
> >
> > On Wed, Jun 19, 2013 at 8:20 AM, Marek Maly <marek.maly.ujep.cz> wrote:
> >
> > > Hi all,
> > >
> > > just a small update from my side.
> > >
> > > Yesterday I received the announcement that CUDA 5.5 is
> > > now available to the public (not just to developers).
> > >
> > > I downloaded it from here:
> > >
> > > https://developer.nvidia.com/cuda-pre-production
> > >
> > > It is still "just" a release candidate (as all Amber/Titan club members
> > > perfectly know :)) ).
> > >
> > > So I installed this newest release and recompiled Amber cuda code.
> > >
> > > I was hoping that maybe some improvement (e.g. in cuFFT) had been "silently"
> > > incorporated, e.g. as a result of Scott's bug report.
> > >
> > > The results of my 100K tests are attached. It seems that, compared to my
> > > latest tests with the CUDA 5.5 release candidate from June 3rd (when it was
> > > accessible just for CUDA developers in the form of a *.run binary installer),
> > > there is some slight improvement - e.g. my more stable TITAN was able to
> > > finish all the 100K tests, including Cellulose, twice. But there is still an
> > > issue with irreproducible JAC NVE/NPT results. On my "less stable" TITAN the
> > > results are slightly better than those older ones as well, but still not
> > > err free (JAC/CELLULOSE) - see the attached file.
> > >
> > > FACTOR IX NVE/NPT again finished with 100% reproducibility on both GPUs,
> > > as usual.
> > >
> > > Scott, do you have any update regarding the "cuFFT"/TITAN issue which you
> > > reported/described to the NVIDIA guys? The latest info from you regarding
> > > this story was that they were able to reproduce the "cuFFT"/TITAN error as
> > > well. Do you have any more recent information? In your opinion, how long
> > > might it take the NVIDIA developers to fully solve such a problem?
> > >
> > > Another thing. It seems that you successfully solved the "GB/TITAN"
> > > problem in the case of bigger molecular systems; here is your relevant
> > > message from June 7th.
> > >
> > > ----------------------------------------------
> > >
> > > Really really interesting...
> > >
> > > I seem to have found a fix for the GB issues on my Titan - not so
> > > surprisingly, it's the same fix as on GTX4xx/GTX5xx...
> > >
> > > But this doesn't yet explain the weirdness with cuFFT so we're not done
> > > here yet...
> > > ----------------------------------------------
> > >
> > > That was already after the latest Amber12 bugfix 18 was released, and no
> > > additional bugfix has been released since then. So will the "GB/TITAN" patch
> > > be released later, maybe as part of some bigger bugfix? Or did you simply
> > > include it in bugfix 18 after its release?
> > >
> > >
> > > My last question maybe deserves a new separate thread, but anyway it would
> > > be interesting to have some information on how "Amber-stable" GTX 780s are
> > > compared to TITANs (of course based on the experience of more users, or on
> > > testing more than 1 or 2 GTX 780 GPUs).
> > >
> > > Best wishes,
> > >
> > > Marek
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > Dne Mon, 03 Jun 2013 01:57:36 +0200 Marek Maly <marek.maly.ujep.cz>
> > > napsal/-a:
> > >
> > >
> > > Hi here are my results with CUDA 5.5
> > >> (Total energy at step 100K(PME)/1000K(GB) (driver 319.23, Amber12
> bugfix
> > >> 18 applied, cuda 5.5))
> > >>
> > >>
> > >> No significant differences comparing the previous test with CUDA 5.0
> > >> (I also added those data to the attached table with CUDA 5.5 test).
> > >>
> > >> Still the same trend: instability in the JAC tests, perfect stability and
> > >> reproducibility in the FACTOR_IX tests (interesting, isn't it? especially
> > >> if we consider 23K atoms in the JAC case and 90K atoms in the case of
> > >> FACTOR_IX). Again the same crashes in the CELLULOSE test, now also in the
> > >> case of TITAN_1. Also, in the stable and reproducible FACTOR_IX the final
> > >> energy values changed slightly compared to the CUDA 5.0 case.
> > >>
> > >> GB simulations (1M steps) again perfectly stable and reproducible.
> > >>
> > >> So to conclude, Scott we trust you :)) !
> > >>
> > >> If you have any idea what else to try (except GPU BIOS editing, perhaps a
> > >> too premature step at this moment), let me know. I have just one last idea,
> > >> which could be to try changing the random seed and see if it has any
> > >> influence on the actual trends (e.g. JAC versus FACTOR_IX).
> > >>
> > >> TO ET: I am curious about your test in a single-GPU configuration.
> > >> Regarding your Win tests, in my opinion they are just a waste of time. They
> > >> perhaps tell you something about the GPU performance, not about eventual
> > >> GPU "soft" errs.
> > >>
> > >> If intensive memtestG80 and/or cuda_memtest results were negative, it is in
> > >> my opinion very unlikely that Win performance testers will find any errs,
> > >> but I am not an expert here ...
> > >>
> > >> Anyway, if you learn which tests the ebuyer is using to confirm GPU errs,
> > >> let us know.
> > >>
> > >> M.
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >> Dne Sun, 02 Jun 2013 19:22:54 +0200 Marek Maly <marek.maly.ujep.cz>
> > >> napsal/-a:
> > >>
> > >>> Hi, so I finally succeeded in compiling the GPU part of Amber under CUDA
> > >>> 5.5 (after "hacking" the configure2 file), with the usual results in the
> > >>> consequent tests:
> > >>>
> > >>> ------
> > >>> 80 file comparisons passed
> > >>> 9 file comparisons failed
> > >>> 0 tests experienced errors
> > >>> ------
> > >>>
> > >>> So now I am running the 100K(PME)/1000K(GB) repetitive benchmark tests
> > >>> under this configuration: drv. 319.23, CUDA 5.5, bugfix 18 installed.
> > >>>
> > >>> When I finish it I will report results here.
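(For readers trying to reproduce this kind of repeat-and-compare run, a rough sketch follows. The file names and paths are placeholders, not the actual benchmark suite layout; only the pmemd.cuda flags are standard.)

-----------
#!/bin/sh
# Run the same PME benchmark twice with identical inputs (and hence the same
# default random seed), then compare the outputs; identical energies are
# expected on a healthy GPU.  mdin/prmtop/inpcrd names are placeholders.
for i in 1 2; do
    $AMBERHOME/bin/pmemd.cuda -O -i mdin -p prmtop -c inpcrd \
        -o mdout.run$i -r restrt.run$i -x mdcrd.run$i
done
sdiff -s mdout.run1 mdout.run2
-----------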
> > >>>
> > >>> M.
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>> Dne Sun, 02 Jun 2013 18:44:23 +0200 Marek Maly <marek.maly.ujep.cz>
> > >>> napsal/-a:
> > >>>
> > >>> Hi Scott thanks for the update !
> > >>>>
> > >>>> Anyway any explanation regarding "cuFFT hypothesis" why there are no
> > >>>> problems
> > >>>> with GTX 580, GTX 680 or even K20c ???
> > >>>>
> > >>>>
> > >>>> Meanwhile I also tried to recompile the GPU part of Amber with
> > >>>> CUDA 5.5 installed; I obtained these errs
> > >>>> already in the configure phase:
> > >>>>
> > >>>> --------
> > >>>> [root.dyn-138-272 amber12]# ./configure -cuda -noX11 gnu
> > >>>> Checking for updates...
> > >>>> Checking for available patches online. This may take a few
> seconds...
> > >>>>
> > >>>> Available AmberTools 13 patches:
> > >>>>
> > >>>> No patches available
> > >>>>
> > >>>> Available Amber 12 patches:
> > >>>>
> > >>>> No patches available
> > >>>> Searching for python2... Found python2.6: /usr/bin/python2.6
> > >>>> Error: Unsupported CUDA version 5.5 detected.
> > >>>> AMBER requires CUDA version == 4.2 .or. 5.0
> > >>>> Configure failed due to the errors above!
> > >>>> ---------
> > >>>>
> > >>>> so it seems that Amber is possible to compile only with CUDA 4.2 or
> > 5.0
> > >>>> at
> > >>>> the moment:
> > >>>>
> > >>>> and this part of configure2 file has to be edited:
> > >>>>
> > >>>>
> > >>>> -----------
> > >>>> nvcc="$CUDA_HOME/bin/nvcc"
> > >>>> sm35flags='-gencode arch=compute_35,code=sm_35'
> > >>>> sm30flags='-gencode arch=compute_30,code=sm_30'
> > >>>> sm20flags='-gencode arch=compute_20,code=sm_20'
> > >>>> sm13flags='-gencode arch=compute_13,code=sm_13'
> > >>>> nvccflags="$sm13flags $sm20flags"
> > >>>>     cudaversion=`$nvcc --version | grep 'release' | cut -d' ' -f5 | cut -d',' -f1`
> > >>>> if [ "$cudaversion" == "5.0" ]; then
> > >>>> echo "CUDA Version $cudaversion detected"
> > >>>> nvccflags="$nvccflags $sm30flags $sm35flags"
> > >>>> elif [ "$cudaversion" == "4.2" ]; then
> > >>>> echo "CUDA Version $cudaversion detected"
> > >>>> nvccflags="$nvccflags $sm30flags"
> > >>>> else
> > >>>> echo "Error: Unsupported CUDA version $cudaversion detected."
> > >>>> echo "AMBER requires CUDA version == 4.2 .or. 5.0"
> > >>>> exit 1
> > >>>> fi
> > >>>> nvcc="$nvcc $nvccflags"
> > >>>>
> > >>>> fi
> > >>>>
> > >>>> -----------
> > >>>>
> > >>>> would it be just OK to change
> > >>>> "if [ "$cudaversion" == "5.0" ]; then"
> > >>>>
> > >>>> to
> > >>>>
> > >>>> "if [ "$cudaversion" == "5.5" ]; then"
> > >>>>
> > >>>>
> > >>>> or should some more flags etc. be defined here to proceed successfully ?
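(For reference, a minimal sketch of the kind of configure2 hack being discussed - assuming the CUDA 5.5 branch can simply reuse the sm_30/sm_35 flags that the 5.0 branch already sets. Untested, and not an official AMBER patch.)

-----------
    elif [ "$cudaversion" == "5.5" ]; then
      echo "CUDA Version $cudaversion detected"
      # assumption: same compute capabilities as the CUDA 5.0 branch
      nvccflags="$nvccflags $sm30flags $sm35flags"
-----------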
> > >>>>
> > >>>>
> > >>>> BTW it seems, Scott, that you are on the way to isolating the problem
> > >>>> soon, so maybe it's better to wait and not lose time with CUDA 5.5
> > >>>> experiments.
> > >>>>
> > >>>> I just thought that cuda 5.5 might be more "friendly" to Titans :))
> > e.g.
> > >>>> in terms of cuFFT function ....
> > >>>>
> > >>>>
> > >>>> I will keep fingers crossed :))
> > >>>>
> > >>>> M.
> > >>>>
> > >>>>
> > >>>>
> > >>>>
> > >>>>
> > >>>>
> > >>>>
> > >>>>
> > >>>>
> > >>>>
> > >>>> Dne Sun, 02 Jun 2013 18:33:52 +0200 Scott Le Grand
> > >>>> <varelse2005.gmail.com>
> > >>>> napsal/-a:
> > >>>>
> > >>>> PS this *might* indicate a software bug in cuFFT, but it needs more
> > >>>>> characterization... And things are going to get a little stream of
> > >>>>> consciousness from here because you're getting unfiltered raw data,
> > so
> > >>>>> please don't draw any conclusions towards anything yet - I'm just
> > >>>>> letting
> > >>>>> you guys know what I'm finding out as I find it...
> > >>>>>
> > >>>>>
> > >>>>>
> > >>>>> On Sun, Jun 2, 2013 at 9:31 AM, Scott Le Grand
> > >>>>> <varelse2005.gmail.com>wrote:
> > >>>>>
> > >>>>> And bingo...
> > >>>>>>
> > >>>>>> At the very least, the reciprocal sum is intermittently
> > >>>>>> inconsistent...
> > >>>>>> This explains the irreproducible behavior...
> > >>>>>>
> > >>>>>> And here's the level of inconsistency:
> > >>>>>> 31989.38940628897399 vs
> > >>>>>> 31989.39168370794505
> > >>>>>>
> > >>>>>> That's error at the level of 1e-7 or a somehow missed
> > single-precision
> > >>>>>> transaction somewhere...
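(For reference: the difference above is about 2.3e-3 on a value of ~3.2e4, i.e. a relative error of roughly 7e-8, which is indeed at the level of single-precision machine epsilon, ~1.2e-7.)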
> > >>>>>>
> > >>>>>> The next question is figuring out why... This may or may not
> > >>>>>> ultimately
> > >>>>>> explain the crashes you guys are also seeing...
> > >>>>>>
> > >>>>>>
> > >>>>>>
> > >>>>>> On Sun, Jun 2, 2013 at 9:07 AM, Scott Le Grand
> > >>>>>> <varelse2005.gmail.com>wrote:
> > >>>>>>
> > >>>>>>
> > >>>>>>> Observations:
> > >>>>>>> 1. The degree to which the reproducibility is broken *does*
> appear
> > to
> > >>>>>>> vary between individual Titan GPUs. One of my Titans breaks
> within
> > >>>>>>> 10K
> > >>>>>>> steps on cellulose, the other one made it to 100K steps twice
> > without
> > >>>>>>> doing
> > >>>>>>> so leading me to believe it could be trusted (until yesterday
> > where I
> > >>>>>>> now
> > >>>>>>> see it dies between 50K and 100K steps most of the time).
> > >>>>>>>
> > >>>>>>> 2. GB hasn't broken (yet). So could you run myoglobin for 500K
> and
> > >>>>>>> TRPcage for 1,000,000 steps and let's see if that's universal.
> > >>>>>>>
> > >>>>>>> 3. Turning on double-precision mode makes my Titan crash rather
> > than
> > >>>>>>> run
> > >>>>>>> irreproducibly, sigh...
> > >>>>>>>
> > >>>>>>> So whatever is going on is triggered by something in PME but not
> > GB.
> > >>>>>>> So
> > >>>>>>> that's either the radix sort, the FFT, the Ewald grid
> > interpolation,
> > >>>>>>> or the
> > >>>>>>> neighbor list code. Fixing this involves isolating this and
> > figuring
> > >>>>>>> out
> > >>>>>>> what exactly goes haywire. It could *still* be software at some
> > very
> > >>>>>>> small
> > >>>>>>> probability but the combination of both 680 and K20c with ECC off
> > >>>>>>> running
> > >>>>>>> reliably is really pointing towards the Titans just being clocked
> > too
> > >>>>>>> fast.
> > >>>>>>>
> > >>>>>>> So how long will this take? Asking people how long it takes to fix a
> > >>>>>>> bug never really works out well. That said, I found the 480 bug within
> > >>>>>>> a week, and my usual turnaround for a bug with a solid repro is <24
> > >>>>>>> hours.
> > >>>>>>>
> > >>>>>>> Scott
> > >>>>>>>
> > >>>>>>> On Sun, Jun 2, 2013 at 7:58 AM, Marek Maly <marek.maly.ujep.cz>
> > >>>>>>> wrote:
> > >>>>>>>
> > >>>>>>> Hi all,
> > >>>>>>>>
> > >>>>>>>> here are my results after bugfix 18 application (see
> attachment).
> > >>>>>>>>
> > >>>>>>>> In principle I don't see any "drastic" changes.
> > >>>>>>>>
> > >>>>>>>> FACTOR_IX still perfectly stable/reproducible on both cards,
> > >>>>>>>>
> > >>>>>>>> JAC tests - problems with finishing AND/OR reproducibility; the
> > >>>>>>>> same for CELLULOSE_NVE, although here it seems that my TITAN_1
> > >>>>>>>> has no problems with this test (but I saw the same trend also
> > >>>>>>>> before bugfix 18 - see my older 500K steps test).
> > >>>>>>>>
> > >>>>>>>> But anyway bugfix 18 brought here one change.
> > >>>>>>>>
> > >>>>>>>> The err
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>> #1 ERR written in mdout:
> > >>>>>>>> ------
> > >>>>>>>> | ERROR: max pairlist cutoff must be less than unit cell max
> > >>>>>>>> sphere
> > >>>>>>>> radius!
> > >>>>>>>> ------
> > >>>>>>>>
> > >>>>>>>> was substituted with err/warning ?
> > >>>>>>>>
> > >>>>>>>> #0 no ERR written in mdout, ERR written in standard output
> > >>>>>>>> (nohup.out)
> > >>>>>>>> -----
> > >>>>>>>> Nonbond cells need to be recalculated, restart simulation from
> > >>>>>>>> previous
> > >>>>>>>> checkpoint
> > >>>>>>>> with a higher value for skinnb.
> > >>>>>>>>
> > >>>>>>>> -----
> > >>>>>>>>
> > >>>>>>>> Another thing,
> > >>>>>>>>
> > >>>>>>>> Recently I started, on another machine with a GTX 580 GPU, a
> > >>>>>>>> simulation of a relatively big system (364275 atoms/PME). The system
> > >>>>>>>> is also composed of "exotic" molecules like polymers; the ff12SB,
> > >>>>>>>> gaff and GLYCAM force fields are used here. I had a problem even with
> > >>>>>>>> the minimization part, having a big energy at the start:
> > >>>>>>>>
> > >>>>>>>> -----
> > >>>>>>>> NSTEP ENERGY RMS GMAX NAME
> > >>>>>>>> NUMBER
> > >>>>>>>> 1 2.8442E+09 2.1339E+02 1.7311E+04 O
> > >>>>>>>> 32998
> > >>>>>>>>
> > >>>>>>>> BOND = 11051.7467 ANGLE = 17720.4706 DIHED =
> > >>>>>>>> 18977.7584
> > >>>>>>>> VDWAALS = ************* EEL = -1257709.6203 HBOND =
> > >>>>>>>> 0.0000
> > >>>>>>>> 1-4 VDW = 7253.7412 1-4 EEL = 149867.0207 RESTRAINT =
> > >>>>>>>> 0.0000
> > >>>>>>>>
> > >>>>>>>> ----
> > >>>>>>>>
> > >>>>>>>> with no chance to minimize the system even with 50 000 steps in both
> > >>>>>>>> min cycles (with constrained and unconstrained solute), and hence the
> > >>>>>>>> NVT heating crashed immediately even with a very small dt. I patched
> > >>>>>>>> Amber12 here with bugfix 18 and the minimization was done without any
> > >>>>>>>> problem with the common 5000 steps (obtaining a target energy of
> > >>>>>>>> -1.4505E+06, while the initial one was that written above).
> > >>>>>>>>
> > >>>>>>>> So indeed bugfix 18 solved some issues, but unfortunately not
> > those
> > >>>>>>>> related to
> > >>>>>>>> Titans.
> > >>>>>>>>
> > >>>>>>>> Here I will try to install cuda 5.5, recompile GPU Amber part
> > with
> > >>>>>>>> this
> > >>>>>>>> new
> > >>>>>>>> cuda version and repeat the 100K tests.
> > >>>>>>>>
> > >>>>>>>> Scott, let us know how your experiment with downclocking of the
> > >>>>>>>> Titan finished. Maybe the best choice here would be to flash the
> > >>>>>>>> Titan directly with your K20c BIOS :))
> > >>>>>>>>
> > >>>>>>>> M.
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>> Dne Sat, 01 Jun 2013 21:09:46 +0200 Marek Maly <
> > marek.maly.ujep.cz>
> > >>>>>>>> napsal/-a:
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>> Hi,
> > >>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>> first of all thanks for providing of your test results !
> > >>>>>>>>>
> > >>>>>>>>> It seems that your results are more or less similar to that of
> > >>>>>>>>> mine maybe with the exception of the results on FactorIX tests
> > >>>>>>>>> where I had perfect stability and 100% or close to 100%
> > >>>>>>>>> reproducibility.
> > >>>>>>>>>
> > >>>>>>>>> Anyway the type of errs which you reported are the same which I
> > >>>>>>>>> obtained.
> > >>>>>>>>>
> > >>>>>>>>> So let's see if bugfix 18 will help here (or at least in the NPT
> > >>>>>>>>> tests) or not. As I wrote just a few minutes ago, it seems that it
> > >>>>>>>>> was still not uploaded to the given server, although its description
> > >>>>>>>>> is already present on the given web page (see
> > >>>>>>>>> http://ambermd.org/bugfixes12.html ).
> > >>>>>>>>>
> > >>>>>>>>> As you can see, this bugfix also contains changes in the CPU code,
> > >>>>>>>>> although the majority is devoted to GPU code, so perhaps the best
> > >>>>>>>>> will be to recompile the whole Amber with this patch. The patch could
> > >>>>>>>>> perhaps be applied even after just the GPU configure command
> > >>>>>>>>> (i.e. ./configure -cuda -noX11 gnu), but after the consequent
> > >>>>>>>>> building only the GPU binaries will be updated. Anyway, I would
> > >>>>>>>>> rather recompile the whole Amber after this patch.
> > >>>>>>>>>
> > >>>>>>>>> Regarding GPU tests under Linux, you may try memtestG80
> > >>>>>>>>> (please use the updated/patched version from here:
> > >>>>>>>>> https://github.com/ihaque/memtestG80 )
> > >>>>>>>>>
> > >>>>>>>>> just use a git command like:
> > >>>>>>>>>
> > >>>>>>>>> git clone https://github.com/ihaque/memtestG80.git PATCHED_MEMTEST-G80
> > >>>>>>>>>
> > >>>>>>>>> to download all the files and save them into a directory named
> > >>>>>>>>> PATCHED_MEMTEST-G80.
> > >>>>>>>>>
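(A rough sketch of fetching, building and running it is below. The positional arguments - MiB of GPU RAM to test and number of iterations - are an assumption from memory, and the build step varies; check the project README / --help.)

-----------
git clone https://github.com/ihaque/memtestG80.git PATCHED_MEMTEST-G80
cd PATCHED_MEMTEST-G80
# build per the project README, then e.g.:
./memtestG80 2048 100    # assumed usage: test 2048 MiB for 100 iterations
-----------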
> > >>>>>>>>> another possibility is to try a perhaps similar (but maybe more up
> > >>>>>>>>> to date) test, cuda_memtest
> > >>>>>>>>> ( http://sourceforge.net/projects/cudagpumemtest/ ).
> > >>>>>>>>>
> > >>>>>>>>> Regarding the ig value: if ig is not present in mdin, the default
> > >>>>>>>>> value is used (i.e. 71277); if ig=-1, the random seed will be based
> > >>>>>>>>> on the current date and time, and hence will be different for every
> > >>>>>>>>> run (not a good variant for our tests). I simply deleted eventual ig
> > >>>>>>>>> records from all mdins, so I assume that in each run the default seed
> > >>>>>>>>> 71277 was automatically used.
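(To make the seed handling concrete, a minimal mdin fragment is sketched below. It is illustrative only - the surrounding &cntrl settings are placeholders, not the benchmark values.)

-----------
 &cntrl
   ! ... other control variables as in the benchmark inputs ...
   ig=71277,   ! explicit fixed seed (same as the default); repeat runs comparable
   ! ig=-1,    ! time-based seed; differs on every run - avoid for these tests
 /
-----------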
> > >>>>>>>>>
> > >>>>>>>>> M.
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>> Dne Sat, 01 Jun 2013 20:26:16 +0200 ET <sketchfoot.gmail.com>
> > >>>>>>>>> napsal/-a:
> > >>>>>>>>>
> > >>>>>>>>> Hi,
> > >>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>> I've put the graphics card into a machine with the working GTX
> > >>>>>>>>>> titan
> > >>>>>>>>>> that I
> > >>>>>>>>>> mentioned earlier.
> > >>>>>>>>>>
> > >>>>>>>>>> The Nvidia driver version is: 133.30
> > >>>>>>>>>>
> > >>>>>>>>>> Amber version is:
> > >>>>>>>>>> AmberTools version 13.03
> > >>>>>>>>>> Amber version 12.16
> > >>>>>>>>>>
> > >>>>>>>>>> I ran 50k steps with the amber benchmark using ig=43689 on
> both
> > >>>>>>>>>> cards.
> > >>>>>>>>>> For
> > >>>>>>>>>> the purpose of discriminating between them, the card I believe
> > >>>>>>>>>> (fingers
> > >>>>>>>>>> crossed) is working is called GPU-00_TeaNCake, whilst the
> other
> > >>>>>>>>>> one
> > >>>>>>>>>> is
> > >>>>>>>>>> called GPU-01_008.
> > >>>>>>>>>>
> > >>>>>>>>>> *When I run the tests on GPU-01_008:*
> > >>>>>>>>>>
> > >>>>>>>>>> 1) All the tests (across 2x repeats) finish apart from the
> > >>>>>>>>>> following
> > >>>>>>>>>> which
> > >>>>>>>>>> have the errors listed:
> > >>>>>>>>>>
> > >>>>>>>>>> --------------------------------------------
> > >>>>>>>>>> CELLULOSE_PRODUCTION_NVE - 408,609 atoms PME
> > >>>>>>>>>> Error: unspecified launch failure launching kernel kNLSkinTest
> > >>>>>>>>>> cudaFree GpuBuffer::Deallocate failed unspecified launch
> failure
> > >>>>>>>>>>
> > >>>>>>>>>> --------------------------------------------
> > >>>>>>>>>> CELLULOSE_PRODUCTION_NPT - 408,609 atoms PME
> > >>>>>>>>>> cudaMemcpy GpuBuffer::Download failed unspecified launch
> > failure
> > >>>>>>>>>>
> > >>>>>>>>>> --------------------------------------------
> > >>>>>>>>>> CELLULOSE_PRODUCTION_NVE - 408,609 atoms PME
> > >>>>>>>>>> Error: unspecified launch failure launching kernel
> kNLSkinTest
> > >>>>>>>>>> cudaFree GpuBuffer::Deallocate failed unspecified launch
> failure
> > >>>>>>>>>>
> > >>>>>>>>>> --------------------------------------------
> > >>>>>>>>>> CELLULOSE_PRODUCTION_NPT - 408,609 atoms PME
> > >>>>>>>>>> cudaMemcpy GpuBuffer::Download failed unspecified launch
> > failure
> > >>>>>>>>>> grep: mdinfo.1GTX680: No such file or directory
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>> 2) The sdiff logs indicate that reproducibility across the
> two
> > >>>>>>>>>> repeats
> > >>>>>>>>>> is
> > >>>>>>>>>> as follows:
> > >>>>>>>>>>
> > >>>>>>>>>> *GB_myoglobin: *Reproducible across 50k steps
> > >>>>>>>>>> *GB_nucleosome:* Reproducible till step 7400
> > >>>>>>>>>> *GB_TRPCage:* Reproducible across 50k steps
> > >>>>>>>>>>
> > >>>>>>>>>> *PME_JAC_production_NVE: *No reproducibility shown from step
> > 1,000
> > >>>>>>>>>> onwards
> > >>>>>>>>>> *PME_JAC_production_NPT*: Reproducible till step 1,000. Also
> > >>>>>>>>>> outfile
> > >>>>>>>>>> is
> > >>>>>>>>>> not written properly - blank gaps appear where something
> should
> > >>>>>>>>>> have
> > >>>>>>>>>> been
> > >>>>>>>>>> written
> > >>>>>>>>>>
> > >>>>>>>>>> *PME_FactorIX_production_NVE:* Reproducible across 50k steps
> > >>>>>>>>>> *PME_FactorIX_production_NPT:* Reproducible across 50k steps
> > >>>>>>>>>>
> > >>>>>>>>>> *PME_Cellulose_production_NVE:* Failure means that both runs do
> > >>>>>>>>>> not finish (see point 1)
> > >>>>>>>>>> *PME_Cellulose_production_NPT:* Failure means that both runs do
> > >>>>>>>>>> not finish (see point 1)
> > >>>>>>>>>>
> > >>>>>>>>>>
> ##############################****############################**
> > >>>>>>>>>> ##**
> > >>>>>>>>>> ###########################
> > >>>>>>>>>>
> > >>>>>>>>>> *When I run the tests on GPU-00_TeaNCake:*
> > >>>>>>>>>> 1) All the tests (across 2x repeats) finish apart from the
> > >>>>>>>>>> following
> > >>>>>>>>>> which
> > >>>>>>>>>> have the errors listed:
> > >>>>>>>>>> -------------------------------------
> > >>>>>>>>>> JAC_PRODUCTION_NPT - 23,558 atoms PME
> > >>>>>>>>>> PMEMD Terminated Abnormally!
> > >>>>>>>>>> -------------------------------------
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>> 2) The sdiff logs indicate that reproducibility across the
> two
> > >>>>>>>>>> repeats
> > >>>>>>>>>> is
> > >>>>>>>>>> as follows:
> > >>>>>>>>>>
> > >>>>>>>>>> *GB_myoglobin:* Reproducible across 50k steps
> > >>>>>>>>>> *GB_nucleosome:* Reproducible across 50k steps
> > >>>>>>>>>> *GB_TRPCage:* Reproducible across 50k steps
> > >>>>>>>>>>
> > >>>>>>>>>> *PME_JAC_production_NVE:* No reproducibility shown from step
> > >>>>>>>>>> 10,000
> > >>>>>>>>>> onwards
> > >>>>>>>>>> *PME_JAC_production_NPT: * No reproducibility shown from step
> > >>>>>>>>>> 10,000
> > >>>>>>>>>> onwards. Also outfile is not written properly - blank gaps
> > appear
> > >>>>>>>>>> where
> > >>>>>>>>>> something should have been written. Repeat 2 Crashes with
> error
> > >>>>>>>>>> noted
> > >>>>>>>>>> in
> > >>>>>>>>>> 1.
> > >>>>>>>>>>
> > >>>>>>>>>> *PME_FactorIX_production_NVE:* No reproducibility shown from
> > step
> > >>>>>>>>>> 9,000
> > >>>>>>>>>> onwards
> > >>>>>>>>>> *PME_FactorIX_production_NPT: *Reproducible across 50k steps
> > >>>>>>>>>>
> > >>>>>>>>>> *PME_Cellulose_production_NVE: *No reproducibility shown from
> > step
> > >>>>>>>>>> 5,000
> > >>>>>>>>>> onwards
> > >>>>>>>>>> *PME_Cellulose_production_NPT:* No reproducibility shown from
> > >>>>>>>>>> step
> > >>>>>>>>>> 29,000 onwards. Also outfile is not written properly - blank
> > gaps
> > >>>>>>>>>> appear
> > >>>>>>>>>> where something should have been written.
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>> Out files and sdiff files are included as attachments
> > >>>>>>>>>>
> > >>>>>>>>>> ###################################################
> > >>>>>>>>>>
> > >>>>>>>>>> So I'm going to update my nvidia driver to the latest version
> > and
> > >>>>>>>>>> patch
> > >>>>>>>>>> amber to the latest version and rerun the tests to see if
> there
> > is
> > >>>>>>>>>> any
> > >>>>>>>>>> improvement. Could someone let me know if it is necessary to
> > >>>>>>>>>> recompile
> > >>>>>>>>>> any
> > >>>>>>>>>> or all of AMBER after applying the bugfixes?
> > >>>>>>>>>>
> > >>>>>>>>>> Additionally, I'm going to run memory tests and heaven
> > benchmarks
> > >>>>>>>>>> on
> > >>>>>>>>>> the
> > >>>>>>>>>> cards to check whether they are faulty or not.
> > >>>>>>>>>>
> > >>>>>>>>>> I'm thinking that there is a mix of hardware
> error/configuration
> > >>>>>>>>>> (esp
> > >>>>>>>>>> in
> > >>>>>>>>>> the case of GPU-01_008) and amber software error in this
> > >>>>>>>>>> situation.
> > >>>>>>>>>> What
> > >>>>>>>>>> do
> > >>>>>>>>>> you guys think?
> > >>>>>>>>>>
> > >>>>>>>>>> Also am I right in thinking (from what Scott was saying) that
> > all
> > >>>>>>>>>> the
> > >>>>>>>>>> benchmarks should be reproducible across 50k steps but begin
> to
> > >>>>>>>>>> diverge
> > >>>>>>>>>> at
> > >>>>>>>>>> around 100K steps? Is there any difference between setting *ig* to
> > >>>>>>>>>> an explicit number and removing it from the mdin file?
> > >>>>>>>>>>
> > >>>>>>>>>> br,
> > >>>>>>>>>> g
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>> On 31 May 2013 23:45, ET <sketchfoot.gmail.com> wrote:
> > >>>>>>>>>>
> > >>>>>>>>>> I don't need sysadmins, but sysadmins need me as it gives
> > purpose
> > >>>>>>>>>> to
> > >>>>>>>>>>
> > >>>>>>>>>>> their
> > >>>>>>>>>>> bureaucratic existence. A encountered evil if working in an
> > >>>>>>>>>>> institution
> > >>>>>>>>>>> or
> > >>>>>>>>>>> comapny IMO. Good science and indiviguality being sacrificed
> > for
> > >>>>>>>>>>> standardisation and mediocrity in the intrerests of
> maintaing a
> > >>>>>>>>>>> system
> > >>>>>>>>>>> that
> > >>>>>>>>>>> focusses on maintaining the system and not the objective.
> > >>>>>>>>>>>
> > >>>>>>>>>>> You need root to move fwd on these things, unfortunately. and
> > ppl
> > >>>>>>>>>>> with
> > >>>>>>>>>>> root are kinda like your parents when you try to borrow money
> > >>>>>>>>>>> from
> > >>>>>>>>>>> them
> > >>>>>>>>>>> .
> > >>>>>>>>>>> age 12 :D
> > >>>>>>>>>>> On May 31, 2013 9:34 PM, "Marek Maly" <marek.maly.ujep.cz>
> > >>>>>>>>>>> wrote:
> > >>>>>>>>>>>
> > >>>>>>>>>>> Sorry why do you need sysadmins :)) ?
> > >>>>>>>>>>>
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> BTW here is the most recent driver:
> > >>>>>>>>>>>>
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> http://www.nvidia.com/object/linux-display-amd64-319.23-driver.html
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> I do not remember anything easier than installing a driver
> > >>>>>>>>>>>> (especially in the case of the binary (*.run) installer) :))
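(For completeness, the usual flow for such a .run installer is roughly as sketched below - the filename is assumed from the 319.23 link above, and the X server / display manager normally has to be stopped first, which varies by distro.)

-----------
# stop X / the display manager first (distro-specific), then:
sudo sh ./NVIDIA-Linux-x86_64-319.23.run
-----------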
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> M.
> > >>>>>>>>>>>>
> > >>>>>>>>>>>>
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> Dne Fri, 31 May 2013 22:02:34 +0200 ET <
> sketchfoot.gmail.com>
> > >>>>>>>>>>>> napsal/-a:
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> > Yup. I know. I replaced a 680 and the everknowing
> sysadmins
> > >>>>>>>>>>>> are
> > >>>>>>>>>>>> reluctant
> > >>>>>>>>>>>> > to install drivers not in the repositoery as they are
> lame.
> > :(
> > >>>>>>>>>>>> > On May 31, 2013 7:14 PM, "Marek Maly" <marek.maly.ujep.cz
> >
> > >>>>>>>>>>>> wrote:
> > >>>>>>>>>>>> >>
> > >>>>>>>>>>>> >> As I already wrote you,
> > >>>>>>>>>>>> >>
> > >>>>>>>>>>>> >> the first driver which properly/officially supports
> Titans,
> > >>>>>>>>>>>> should
> > >>>>>>>>>>>> be
> > >>>>>>>>>>>> >> 313.26 .
> > >>>>>>>>>>>> >>
> > >>>>>>>>>>>> >> Anyway I am curious mainly about your 100K repetitive
> tests
> > >>>>>>>>>>>> with
> > >>>>>>>>>>>> >> your Titan SC card. Especially in case of these tests (
> > >>>>>>>>>>>> JAC_NVE,
> > >>>>>>>>>>>> JAC_NPT
> > >>>>>>>>>>>> >> and CELLULOSE_NVE ) where
> > >>>>>>>>>>>> >> my Titans SC randomly failed or succeeded. In
> > FACTOR_IX_NVE,
> > >>>>>>>>>>>> >> FACTOR_IX_NPT
> > >>>>>>>>>>>> >> tests both
> > >>>>>>>>>>>> >> my cards are perfectly stable (independently from drv.
> > >>>>>>>>>>>> version)
> > >>>>>>>>>>>> and
> > >>>>>>>>>>>> also
> > >>>>>>>>>>>> >> the runs
> > >>>>>>>>>>>> >> are perfectly or almost perfectly reproducible.
> > >>>>>>>>>>>> >>
> > >>>>>>>>>>>> >> Also if your test will crash please report the eventual
> > errs.
> > >>>>>>>>>>>> >>
> > >>>>>>>>>>>> >> To this moment I have this actual library of errs on my
> > >>>>>>>>>>>> Titans
> > >>>>>>>>>>>> SC
> > >>>>>>>>>>>> GPUs.
> > >>>>>>>>>>>> >>
> > >>>>>>>>>>>> >> #1 ERR written in mdout:
> > >>>>>>>>>>>> >> ------
> > >>>>>>>>>>>> >> | ERROR: max pairlist cutoff must be less than unit
> cell
> > >>>>>>>>>>>> max
> > >>>>>>>>>>>> sphere
> > >>>>>>>>>>>> >> radius!
> > >>>>>>>>>>>> >> ------
> > >>>>>>>>>>>> >>
> > >>>>>>>>>>>> >>
> > >>>>>>>>>>>> >> #2 no ERR written in mdout, ERR written in standard
> output
> > >>>>>>>>>>>> (nohup.out)
> > >>>>>>>>>>>> >>
> > >>>>>>>>>>>> >> ----
> > >>>>>>>>>>>> >> Error: unspecified launch failure launching kernel
> > >>>>>>>>>>>> kNLSkinTest
> > >>>>>>>>>>>> >> cudaFree GpuBuffer::Deallocate failed unspecified launch
> > >>>>>>>>>>>> failure
> > >>>>>>>>>>>> >> ----
> > >>>>>>>>>>>> >>
> > >>>>>>>>>>>> >>
> > >>>>>>>>>>>> >> #3 no ERR written in mdout, ERR written in standard
> output
> > >>>>>>>>>>>> (nohup.out)
> > >>>>>>>>>>>> >> ----
> > >>>>>>>>>>>> >> cudaMemcpy GpuBuffer::Download failed unspecified launch
> > >>>>>>>>>>>> failure
> > >>>>>>>>>>>> >> ----
> > >>>>>>>>>>>> >>
> > >>>>>>>>>>>> >> Another question, regarding your Titan SC, it is also
> EVGA
> > as
> > >>>>>>>>>>>> in
> > >>>>>>>>>>>> my
> > >>>>>>>>>>>> case
> > >>>>>>>>>>>> >> or it is another producer ?
> > >>>>>>>>>>>> >>
> > >>>>>>>>>>>> >> Thanks,
> > >>>>>>>>>>>> >>
> > >>>>>>>>>>>> >> M.
> > >>>>>>>>>>>> >>
> > >>>>>>>>>>>> >>
> > >>>>>>>>>>>> >>
> > >>>>>>>>>>>> >> Dne Fri, 31 May 2013 19:17:03 +0200 ET <
> > sketchfoot.gmail.com
> > >>>>>>>>>>>> >
> > >>>>>>>>>>>> napsal/-a:
> > >>>>>>>>>>>> >>
> > >>>>>>>>>>>> >> > Well, this is interesting...
> > >>>>>>>>>>>> >> >
> > >>>>>>>>>>>> >> > I ran 50k steps on the Titan on the other machine with
> > >>>>>>>>>>>> driver
> > >>>>>>>>>>>> 310.44
> > >>>>>>>>>>>> >> and
> > >>>>>>>>>>>> >> > it
> > >>>>>>>>>>>> >> > passed all the GB steps. i.e totally identical results
> > over
> > >>>>>>>>>>>> two
> > >>>>>>>>>>>> >> repeats.
> > >>>>>>>>>>>> >> > However, it failed all the PME tests after step 1000.
> I'm
> > >>>>>>>>>>>> going
> > >>>>>>>>>>>> to
> > >>>>>>>>>>>> > update
> > >>>>>>>>>>>> >> > the driver and test it again.
> > >>>>>>>>>>>> >> >
> > >>>>>>>>>>>> >> > Files included as attachments.
> > >>>>>>>>>>>> >> >
> > >>>>>>>>>>>> >> > br,
> > >>>>>>>>>>>> >> > g
> > >>>>>>>>>>>> >> >
> > >>>>>>>>>>>> >> >
> > >>>>>>>>>>>> >> > On 31 May 2013 16:40, Marek Maly <marek.maly.ujep.cz>
> > >>>>>>>>>>>> wrote:
> > >>>>>>>>>>>> >> >
> > >>>>>>>>>>>> >> >> One more thing,
> > >>>>>>>>>>>> >> >>
> > >>>>>>>>>>>> >> >> can you please check under which frequency is running
> > that
> > >>>>>>>>>>>> your
> > >>>>>>>>>>>> >> titan ?
> > >>>>>>>>>>>> >> >>
> > >>>>>>>>>>>> >> >> As the base frequency of normal Titans is 837MHz and
> the
> > >>>>>>>>>>>> Boost
> > >>>>>>>>>>>> one
> > >>>>>>>>>>>> is
> > >>>>>>>>>>>> >> >> 876MHz I
> > >>>>>>>>>>>> >> >> assume that your GPU is running automatically also under
> > >>>>>>>>>>>> >> >> its boost frequency (876MHz).
> > >>>>>>>>>>>> >> >> You can find this information e.g. in Amber mdout
> file.
> > >>>>>>>>>>>> >> >>
ESET NOD32 Antivirus License Keys (Serial Number / Activation Code) – Updated Regularly
Note
- License keys will .
- If one license key is not available, try another one.
License keys
2020.07
Username | Password | License Key | Valid Until |
---|---|---|---|
EAV-0275498024 | mxs7pmsfvc | 5MS7-XPVT-K67E-XBT3-6WEF | 15/07/2020 |
EAV-0275601345 | eehjvbahpu | P4C4-XGB8-GDXW-3J7H-KVN8 | 15/07/2020 |
EAV-0275710488 | 6xxka9pxuh | XGN3-XMGJ-NA29-59J2-VAW3 | 17/07/2020 |
EAV-0275726745 | xdr4uc6953 | BK68-XSA4-DAWH-A6JM-9KFG | 17/07/2020 |
EAV-0275747140 | tfp993jhma | ARPW-XV7A-XMHT-2EW8-J7TS | 17/07/2020 |
EAV-0275747148 | nf78xhp8kk | A8UJ-XC4M-4EM6-MGJG-UTR6 | 17/07/2020 |
EAV-0275769953 | xb3h22uu7a | 7EPR-XDUC-W24C-6X3N-MAVH | 18/07/2020 |
EAV-0275769990 | a34hjm652v | DV2N-X2R7-46UT-C7CG-BUKB | 18/07/2020 |
EAV-0275770283 | v3kasxts5v | SP5V-XJ68-R2M2-UWPF-KDPE | 18/07/2020 |
EAV-0275770304 | 489tamd6xb | PT4E-X2TK-JK7A-8CKA-UF5H | 18/07/2020 |
EAV-0275770326 | 28j2vx5h2t | 36XX-X9DN-5RM6-ABJ8-MTVF | 18/07/2020 |
EAV-0275796778 | t6xbaasven | PA64-XNWU-2TVC-NKGC-2D9X | 18/07/2020 |
EAV-0275813236 | hjm6s9ccr4 | N2E3-XEBH-B4FR-VHE8-28T8 | 19/07/2020 |
EAV-0275813987 | aucmtfec7t | EMXD-XPCT-ENSS-3S7X-8UWG | 19/07/2020 |
EAV-0275815117 | vaxtm4nahr | 8V2P-XRGX-JFDS-T9KH-672W | 19/07/2020 |
EAV-0275815282 | fh5frmped9 | 8HKU-X29G-WGAE-BJPP-N6EG | 19/07/2020 |
EAV-0275815295 | rfat5epkk7 | CTXM-XK46-HHAT-F6EF-JBCN | 19/07/2020 |
EAV-0275815307 | 5uf9n9jvae | CHNH-XHGK-C3U6-SRCM-CDWE | 19/07/2020 |
EAV-0275817179 | vc837uta4e | FF2H-X2W5-6PGP-HDT5-WEUK | 19/07/2020 |
EAV-0275821537 | ptaamprmc2 | 5933-XKDW-AK9H-U6DD-P2PF | 19/07/2020 |
EAV-0275821555 | 5cxprh9ac6 | 3MX3-XX5T-3V47-E4TW-FPVX | 19/07/2020 |
EAV-0275821567 | 2h4ec3n2am | 7MUV-XJT2-H5BR-NTGG-CKTR | 19/07/2020 |
EAV-0275821784 | 2xs262a5he | FSF5-X5BB-UXMP-3E83-48A7 | 19/07/2020 |
EAV-0275821799 | murssj4kj7 | FU23-XVMS-KTVN-SVGU-ACHP | 19/07/2020 |
EAV-0275821816 | 77cxb4cucf | KXP9-XDNM-V3NE-DEBS-ASNF | 19/07/2020 |
EAV-0275821832 | h8skedc9j9 | SCKP-XEKB-E45B-MEMP-6GK7 | 19/07/2020 |
EAV-0275822047 | kvsha9udux | HX6A-XBV4-7XHN-VP23-UVSR | 19/07/2020 |
EAV-0275822061 | fxba65pdsr | W8CE-X2D9-PBAR-TVST-VNB9 | 19/07/2020 |
EAV-0275875377 | vc5vks6e54 | 9VHD-XJAD-6UTC-XGUC-8M28 | 20/07/2020 |
EAV-0275879161 | a3efbrp8db | TC5J-X7JH-G6CX-MPSN-SNKU | 20/07/2020 |
EAV-0275944651 | vc295n7nfn | GNNS-XM3B-H7CG-2TNX-4XWK | 21/07/2020 |
EAV-0275947135 | 6ddt9nsu8x | JUC7-XRSN-72CB-EV5M-TNRB | 21/07/2020 |
EAV-0275955022 | v4u2vdmcdb | TJWP-XGHU-UKMV-V7E9-R4GT | 21/07/2020 |
EAV-0275955177 | fejkap7hjf | NP5V-XSAH-SN36-VPPH-VKXF | 21/07/2020 |
EAV-0275955554 | kx3b85afud | SF5M-XXT5-R7TX-CRBR-89ET | 21/07/2020 |
EAV-0275975840 | 6abjsdetx7 | CCT2-XKRB-EA4M-3BKW-NRBK | 21/07/2020 |
EAV-0275975901 | 85rbe8vnca | G8E5-X8BU-68WB-A7PK-EHNM | 21/07/2020 |
EAV-0275976127 | 8akdj33fde | WV25-X26S-VS6N-8BVD-3S4F | 21/07/2020 |
EAV-0275976143 | pjc4jept8n | B4PR-XN2W-JVCV-3GD5-H7BB | 21/07/2020 |
EAV-0275976156 | x7vb2nb4ae | S5SP-X42X-NWVA-KHM6-HA9E | 21/07/2020 |
EAV-0275979676 | a49v9mpps6 | 2STP-XX27-V3VR-WMGH-KB2K | 21/07/2020 |
EAV-0276049603 | 692frkrt4p | D2PW-XU7K-5EAN-W7WX-9K7X | 23/07/2020 |
EAV-0276055949 | n89vnsvf6k | WJ9P-X886-V7C9-BHUE-A84F | 23/07/2020 |
EAV-0276061170 | ej7t2ptmuv | 87F5-X2AW-A9TE-MABD-W5RF | 23/07/2020 |
EAV-0276138823 | bn82uke6f3 | 8UWH-XVR5-R27R-MUX5-JSHC | 25/07/2020 |
EAV-0276149116 | 9kx2tknvnm | 5GNG-XENG-F89F-2PN3-GBF6 | 25/07/2020 |
EAV-0276161927 | 2xa66f3vuj | TC45-XPGX-NS46-BESN-3MG8 | 25/07/2020 |
EAV-0276167086 | 3hn6xa9axt | FFM5-XV64-CFD6-J485-W424 | 26/07/2020 |
EAV-0276168770 | cte45u32eh | WFRA-XDH9-RDPK-PJ8K-7XDG | 26/07/2020 |
EAV-0276174161 | 8jpd7p2h5e | UNEE-XWX9-5GUD-S8H4-FRMJ | 26/07/2020 |
EAV-0276185821 | e9u5b3ku3d | 65TH-XD47-M6SJ-KD43-MK9K | 26/07/2020 |
2020.08
Username | Password | License Key | Valid Until |
---|---|---|---|
EAV-0276445303 | sc5f3crfet | EGXM-XK7K-GBVE-A4H9-CC8W | 04/08/2020 |
EAV-0276479133 | a6a3desvme | DEAS-W33U-6FBU-U55N-M593 | 04/08/2020 |
EAV-0276479133 | a6a3desvme | NBXX-XJ37-AD58-JJ8S-A46K | 04/08/2020 |
EAV-0277764458 | eakxv3p3ru | JSV9-XWFA-UUWW-KECP-A494 | 04/08/2020 |
EAV-0277688761 | 7jbshxpexe | AN82-X9KC-KJWP-NUR6-DWGW | 21/08/2020 |
EAV-0277706946 | t84s2pdxc5 | D8R2-X7JH-WEJG-9CPC-AEHF | 21/08/2020 |
EAV-0277706954 | cc4c9u54r3 | DVUK-XCC5-HNC2-2DPR-RP2R | 21/08/2020 |
EAV-0277717403 | 5962399t99 | HRGU-XCGA-G2E9-PSJT-9PD9 | 22/08/2020 |
EAV-0277718288 | h3u2k8p82f | XJUN-XXN2-B5JK-TFNE-TP2M | 22/08/2020 |
EAV-0277718289 | dhap9m352u | 675S-XHWJ-92F3-CCCA-GR8G | 22/08/2020 |
EAV-0277720468 | hb4ut2uf7f | NM3M-XN7J-DPU6-7NH7-SS67 | 22/08/2020 |
EAV-0277720697 | 3mb3s3xstj | MJS3-X4WW-99G6-359T-57GC | 22/08/2020 |
EAV-0277722010 | 96rstm5cc6 | UBD3-XRC8-77JW-3SVJ-X4GP | 22/08/2020 |
EAV-0277722150 | skfu8792x4 | 6K6U-XK65-KNXP-P43C-S849 | 22/08/2020 |
EAV-0277730443 | jr79nuspep | 7FJH-XRP9-HV3N-MTSD-X95E | 22/08/2020 |
EAV-0277739813 | v6bfjhjjc5 | TGR9-XF4N-UH3D-MC87-5W4S | 23/08/2020 |
EAV-0277739818 | k24x4ufkhm | DBGC-X2BP-CU7U-KFX9-VJGE | 23/08/2020 |
EAV-0277747237 | knt9xv4xj5 | GEWG-X9A6-S3CC-FDH3-NW9C | 23/08/2020 |
EAV-0277771486 | fejvja3nra | 29PJ-XSCX-D567-DUFW-D877 | 23/08/2020 |
EAV-0277775680 | dpkbbk4666 | BFFB-XG8D-X9WS-RR2R-MP88 | 24/08/2020 |
EAV-0277776013 | 4755u7f76a | APSE-XR7R-GCHF-5CKF-R6SG | 24/08/2020 |
EAV-0277780818 | 4bkjvmfmut | AUH5-XVMV-5R9X-2966-CPF3 | 24/08/2020 |
EAV-0277785457 | dfkbjumnkt | AC2A-XS6P-38DX-6GEV-4747 | 24/08/2020 |
EAV-0277805873 | cbdk738x46 | J2RF-XGB3-PXDF-8H8N-KWUR | 24/08/2020 |
EAV-0277809439 | hu9vr7vecb | J7FC-XG3D-2HRC-9VVK-UPFT | 24/08/2020 |
EAV-0277816777 | a4vpd59ju5 | J56M-XA2H-GB8H-K6SA-TCKV | 24/08/2020 |
2020.09
Username | Password | License Key | Valid Until |
---|---|---|---|
TRIAL-0276248840 | 7juau8rx29 | 2CA4-XNUE-6RU3-N6NA-FPK7 | 18/09/2020 |
EAV-0277718288 | h3u2k8p82f | XJUN-XXN2-B5JK-TFNE-TP2M | 21/09/2020 |
EAV-0264337653 | vmvt8ehk7h | HBAK-W34B-CMRJ-JNUU-9FV8 | 30/09/2020 |
EAV-0278194543 | n4hmrb74ak | EWBX-XSMN-TWWS-VRAC-TF42 | 02/09/2020 |
EAV-0278527458 | 5hck3absep | P3W3-X5WE-WKPV-RNH9-BEUJ | 10/09/2020 |
EAV-0278528046 | hxm97dd28d | TRKE-XGVS-JANX-FF9M-NMCE | 10/09/2020 |
EAV-0279395723 | rsu4ujccaf | HC6X-XVBF-XK7W-GA5K-HM4N | 29/09/2020 |
2020.10
Username | Password | License Key | Valid Until |
---|---|---|---|
EAV-0280284226 | rmcjxaxc6h | 8F5J-X6S2-4475-8D65-F8AG | 04/10/2020 |
EAV-0280284446 | 66ana39h2n | 6WHF-XUGU-BW3M-3387-K8HH | 04/10/2020 |
EAV-0280284447 | acphck85pe | 7B2F-X9PF-GGVW-7SWT-HM99 | 04/10/2020 |
EAV-0280285257 | vds8krb2uc | 26XC-XXR8-W557-W96D-2ECH | 04/10/2020 |
EAV-0274703097 | axec3dfnbs | 7WA6-XUAM-REA4-U4DE-GDB9 | 08/10/2020 |
EAV-0279645602 | 666m3anhs7 | XJNG-X958-4F4P-DH45-KXXA | 05/10/2020 |
EAV-0279766109 | 4s4cer3dv6 | GFVP-XDXR-AWDX-9JTM-PTWA | 07/10/2020 |
EAV-0279776202 | utt2k4ne2s | JS83-XUU8-MP2X-SBJ2-3M93 | 08/10/2020 |
EAV-0279862278 | dfa9cv4c39 | MVN8-XEN5-9G6A-TSVE-5TJM | 10/10/2020 |
EAV-0279999707 | 3avaxrc34k | 25XG-XPNN-TBPE-XVMS-JD52 | 12/10/2020 |
EAV-0280393914 | 32tdsfbu43 | D9V8-XU2A-KWSF-VMH8-265V | 21/10/2020 |
EAV-0280396435 | 2295jkr48v | S8RX-XB8A-C6FT-DUJ6-5C76 | 21/10/2020 |
EAV-0280438564 | hu9bd78smp | VC5J-XRWD-N2ND-7662-4GJ6 | 22/10/2020 |
EAV-0280438573 | aaabr5x66k | JEK2-XHW3-9U5G-MT4X-XGGB | 22/10/2020 |
EAV-0280452445 | tu5cp3mv88 | 8P46-XSVH-7K9B-XABA-MBHA | 22/10/2020 |
EAV-0280486307 | ufmm8s9akr | K3DS-X22G-EGTJ-JJ3N-TEGU | 23/10/2020 |
EAV-0280563347 | m5xkt6ebuh | NC9H-XA4X-K8XR-PP5S-2KAN | 25/10/2020 |
EAV-0280303329 | ec8ksd2u4n | AHWH-XSBE-ESRP-DSTD-X32P | 06/10/2020 |
EAV-0280537593 | 62v34xn4pu | SR84-X73E-TVMM-KJRR-3XDE | 10/10/2020 |
EAV-0280537594 | rvc69p7pee | SH2A-X59V-PFJ5-RXHH-DGSH | 10/10/2020 |
2020.11
Username | Password | License Key | Valid Until |
---|---|---|---|
EAV-0277116938 | 92cesc7prf | 578B-X7WD-X9PA-C54S-CNTN | 18/11/2020 |
EAV-0277117084 | k8uadkp6s3 | 8NDC-X32K-GFNG-3JTP-UCB7 | 18/11/2020 |
EAV-0277117086 | t9ujv56ruu | FJ5H-X7F5-4A32-BJ7V-KD2H | 18/11/2020 |
EAV-0277117089 | hcc86u5jst | 9VJ9-X9XR-UBDH-4AC6-GXHD | 18/11/2020 |
EAV-0277117090 | jueu4k6p6u | WRPH-X3TK-HNP8-HHPR-WDTG | 18/11/2020 |
EAV-0277117228 | brn5h264xu | ECJC-X559-TN6X-B59P-8A35 | 18/11/2020 |
EAV-0279980237 | vcadamp3vb | DEAS-W33W-4TDR-RPFV-52S7 | 11/11/2020 |
EAV-0280095941 | jdt73hkaff | DEAS-W33W-4BRH-HBGR-UWX9 | 13/11/2020 |
EAV-0280096105 | kepb356xc2 | DEAS-W33W-4APW-W8PC-6C3P | 13/11/2020 |
EAV-0280127322 | a7p6exp24b | DEAS-W33W-4FUH-H5FG-G2DM | 14/11/2020 |
EAV-0280176067 | 6vk29u6bf3 | DEAS-W33W-4AD3-3V46-26T4 | 15/11/2020 |
EAV-0280211547 | sas8n9m8xr | DEAS-W33W-44U7-7TRB-F4KC | 16/11/2020 |
EAV-0280223509 | hfpan4da5t | DEAS-W33W-4TAA-AXEX-NAET | 16/11/2020 |
EAV-0280251097 | 2k5axt38bj | DEAS-W33W-4AUG-G3UE-S8AW | 17/11/2020 |
EAV-0280265505 | u5en2km68c | DEAS-W33W-4SS3-3EGS-GV2F | 17/11/2020 |
EAV-0280300731 | r76fj3vhmm | DEAS-W33W-4U7D-D482-C9PS | 18/11/2020 |
EAV-0280310932 | n3pausc5sc | DEAS-W33W-4AG4-4V75-VEC4 | 18/11/2020 |
EAV-0280351649 | bfa9n3t5mv | DEAS-W33W-4FMS-S7ES-KGMB | 19/11/2020 |
EAV-0280369923 | n6hd9xn4jh | DEAS-W33W-4R8S-SHFR-MHK4 | 19/11/2020 |
EAV-0280239512 | th7ccruuad | DEAS-W33W-4AJG-GTNR-CDX5 | 17/11/2020 |
EAV-0280241812 | 2u82kumaam | DEAS-W33W-4AB9-9UBR-B4J6 | 17/11/2020 |
EAV-0280241942 | emkrsbmh3c | DEAS-W33W-4WNB-BGGB-H7RB | 17/11/2020 |
EAV-0280249034 | f23rs6cemn | DEAS-W33W-425M-MVNP-XPW7 | 17/11/2020 |
EAV-0280249871 | ktmsxercrt | DEAS-W33W-49UH-HKPJ-MVJA | 17/11/2020 |
EAV-0280251095 | xaacd4vkcb | DEAS-W33W-4U5F-FXAP-MXM9 | 17/11/2020 |
EAV-0280257206 | 529bfpt4x9 | DEAS-W33W-4ABS-S8PU-VWBU | 17/11/2020 |
EAV-0280266591 | t9983m8csv | DEAS-W33W-4PE5-5TD3-7GFM | 17/11/2020 |
EAV-08927705 | ut7rpv8mdp | B93V-X67N-XSFJ-DK9R-3X74 | 01/11/2020 |
2020.12
Username | Password | License Key | Valid Until |
---|---|---|---|
EAV-0277877609 | vpse6dx52s | BE78-XARD-A2GH-AXEN-NRMF | 05/12/2020 |
EAV-0277877610 | npacx32d45 | GTAK-XK34-4CT3-G6GJ-WGWP | 05/12/2020 |
EAV-0277877616 | macvskf9jk | 9X53-XXDX-H59H-MNR4-EETK | 05/12/2020 |
EAV-0277877875 | hjpj3hff5s | 4MRK-XJRG-J8WH-K79W-2JBC | 05/12/2020 |
EAV-0280070576 | 23fecv9ue3 | DEAS-W33W-4JHW-W2XH-T97J | 13/12/2020 |
EAV-0280396421 | 2chmpxf23d | MTE5-X9XX-F2JX-8X3V-W7R8 | 20/12/2020 |
EAV-0280457464 | r8vbp3mdhu | DEAS-W33W-43DT-TRF3-UWJA | 21/12/2020 |
2021.01
Username | Password | License Key | Valid Until |
---|---|---|---|
EAV-0277877608 | j2evns6add | GVBU-X258-RPE6-9V4P-GDEC | 04/01/2021 |
CVE-2006-6676 Detail
This vulnerability has been modified since it was last analyzed by the NVD. It is awaiting reanalysis which may result in further changes to the information provided.
Current Description
Integer overflow in the (a) OLE2 and (b) CHM parsers for ESET NOD32 Antivirus before 1.1743 allows remote attackers to execute arbitrary code via a crafted (1) .DOC or (2) .CAB file that triggers a heap-based buffer overflow.
View Analysis Description
Analysis Description
Integer overflow in ESET NOD32 Antivirus before 1.1743 allows remote attackers to execute arbitrary code via a crafted .DOC file that triggers a heap-based buffer overflow.
Evaluator Solution
This vulnerability is addressed in the following product update: Eset Software, NOD32 Antivirus, 1.1743
Weakness Enumeration
CWE-ID | CWE Name | Source |
---|---|---|
CWE-189 | Numeric Errors |
Change History
2 change records found.
CVE Modified by MITRE 10/17/2018 5:49:22 PM
Action | Type | Old Value | New Value |
---|---|---|---|
Added | Reference | http://www.securityfocus.com/archive/1/454949/100/0/threaded [No Types Assigned] | |
Added | Reference | http://www.securityfocus.com/archive/1/455045/100/0/threaded [No Types Assigned] | |
Removed | Reference | http://www.securityfocus.com/archive/1/archive/1/454949/100/0/threaded [Patch, Vendor Advisory] | |
Removed | Reference | http://www.securityfocus.com/archive/1/archive/1/455045/100/0/threaded [No Types Assigned] |
Initial CVE Analysis 12/21/2006 12:10:00 PM
Action | Type | Old Value | New Value |
---|---|---|---|
Quick Info
CVE Dictionary Entry: CVE-2006-6676
NVD Published Date: 12/20/2006
NVD Last Modified: 10/17/2018
Source: MITRE
What’s New in the NOD32 AntiVirus 12 patch Archives?
Screen Shot

System Requirements for NOD32 AntiVirus 12 patch Archives
- First, download the NOD32 AntiVirus 12 patch Archives.
- You can download its setup from the given links: