Our Test Criteria
In this article, you will find information about the tests we run on our test models and how we process the results.
Our editors have years of experience and have tested numerous notebooks from various manufacturers. We used this extensive experience to create a catalog of test criteria that covers all purchase-relevant aspects of a device. Consumers' demands have changed over the years and technologies have evolved. As such, we must constantly update and add to our test criteria in order to give readers the information they are looking for. As a reader, you can contribute greatly with your input. We are always happy to receive an email or a new post in our forum. Our editors and a long line of helpful moderators are available and will gladly take your tips.
The following are the main aspects of every review performed by Notebookcheck.net (explanations are included):
The case of our test model is examined very closely. We judge it by the following criteria: design (colors, shape, material, feel, measurements, weight, etc.), build quality (gaps, finish, edges, precision, how securely every component sits, etc.), sturdiness (how the notebook reacts to pressure at a single point and across the entire surface, torsion resistance, etc.), hinges (stiffness, how well they hold the display, longevity, etc.), and maintenance (upgrades which can be performed by the user, cleaning of the device/cooling fan, etc.). The editor rates the build quality based on their judgment, comparing the device to previously tested models. The rating is then discussed with the editorial team and the respective divisional director.
We judge the ports and interfaces available on the test model and their positioning in consideration of the device category.
If a test model features an SD card reader, we test it to estimate the transfer rates you can expect. For this purpose, we use our reference SD cards, currently a Toshiba Exceria Pro SDXC UHS-II for full-size readers and a Toshiba Exceria microSDHC UHS-I for devices with microSD slots. We determine the maximum transfer rate you can expect when copying large blocks of data (e.g. videos) from the SD card to the test device using the AS SSD sequential read test, as well as the expected transfer rate when copying many images (around 1 GB of standardized .jpg test files, about 5 MB each), which is usually significantly lower than the maximum. The following figures show the current minimum, maximum, and average values in our database (as of 07/2016).
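The throughput figure itself is simple arithmetic: total bytes copied divided by elapsed time. A minimal sketch (the file count and timing below are made-up illustration values, not measurements from our database):

```python
def copy_throughput_mb_s(total_bytes: int, elapsed_s: float) -> float:
    """Average transfer rate in MB/s for a copy of `total_bytes` bytes."""
    return total_bytes / (1024 * 1024) / elapsed_s

# Hypothetical example: ~1 GB of 5 MB JPEG test files copied in 20 s
num_files = 200                  # invented count
file_size = 5 * 1024 * 1024     # 5 MB each
elapsed = 20.0                  # invented timing in seconds
rate = copy_throughput_mb_s(num_files * file_size, elapsed)
print(f"{rate:.1f} MB/s")       # 1000 MB in 20 s -> 50.0 MB/s
```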
We evaluate the communication features, such as LAN, Wi-Fi, Bluetooth, 3G, 4G, etc. In addition to a real-world test of the telephone features in smartphones, we also run a standardized Wi-Fi test.
In a standardized test setup, we determine the maximum transfer rates (sending and receiving) when connected to our Linksys EA8500 reference router at a distance of 1 m, using the iperf3 software (parameters: -i 1 -t 30 -w 4M -P 10 -O 3). This test uses the fastest transfer standard supported by the test model. Below you can see three examples with the current minimum, average, and maximum values from our tests (May 2017).
WiFi performance: receive
WiFi performance: transmit
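For reference, an iperf3 invocation with the parameters listed above can be sketched as follows. The router address is a hypothetical placeholder; -R is iperf3's standard reverse-mode flag, used here for the receive direction:

```python
ROUTER_IP = "192.168.1.1"  # hypothetical address of the reference router

def iperf3_cmd(server: str, receive: bool = False) -> list:
    """Build the iperf3 command line with the parameters given above.

    receive=True appends -R, so the server sends and the client receives."""
    cmd = ["iperf3", "-c", server,
           "-i", "1", "-t", "30", "-w", "4M", "-P", "10", "-O", "3"]
    if receive:
        cmd.append("-R")
    return cmd

print(" ".join(iperf3_cmd(ROUTER_IP, receive=True)))
```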
We run a real-world test on mobile devices which come with a GPS module. The editor cycles along a route, recording it with both the test model and our reference navigation device. Comparing the two data sets allows us to judge the precision and reliability of the built-in GPS module. In addition, we record the GPS signal inside and outside of buildings.
We test the front camera (webcam/selfie camera) and the rear camera (primary camera of smartphones) and evaluate the quality by comparing the images to standardized reference cards and to the image quality of current flagship models. Among other things, we judge sharpness, colors, contrast, low-light sensitivity, and the available video features.
The following example compares the image quality of the Samsung Galaxy J5 2016 to that of the reference camera and other models across three test scenes.
The following criteria are taken into consideration:
- Keyboard: layout (positioning, size, grouping, function keys, lettering, etc.), typing experience (key travel, pressure point, stroke, noise, etc.), additional keys if available
- Touchpad: response (surface, multi-touch, etc.), mouse buttons (use, noise, etc.)
- Touch display: response (precision, reaction time, etc.), virtual keyboard (layout, feedback, response, key size, etc.), sensors, digitizer if available
The following criteria are considered for the display rating based on the measurement results: resolution and format (pixel density, clarity of the display, ease of use with multiple windows, etc.), display brightness [cd/m²] (maximum, minimum, mains operation/battery mode, etc.), brightness distribution (dark areas, bleeding, etc.), contrast (max., black value, etc.), colors (DeltaE ColorChecker, Grayscale), covered color space (sRGB, AdobeRGB98), viewing angles, glare, PWM, response time.
Brightness, brightness distribution, and contrast
We use an X-Rite i1 Pro 2 spectrophotometer together with the latest version of the CalMAN Ultimate software for our display measurements. The brightness measurement is taken after the screen has displayed 100% white for 10 minutes. Device settings such as automatic brightness adjustment are deactivated, and the color profile of the device is left at factory settings (not user defined).
The black value is likewise measured after a 10-minute period during which the screen displays 100% black (at maximum brightness). The measurements are taken from the central area of the screen in a completely dark room. From these we calculate the maximum contrast of the display in the central area of the screen. We determine the evenness of the illumination by comparing the brightest segment of the screen with its darkest.
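The two derived values boil down to simple ratios, which can be sketched as follows (the nine-quadrant readings and black value are hypothetical, not from a real panel):

```python
def contrast_ratio(max_brightness: float, black_value: float) -> float:
    """Contrast = white luminance / black luminance, both in cd/m²."""
    return max_brightness / black_value

def brightness_distribution(quadrants: list) -> float:
    """Ratio of darkest to brightest quadrant, as a percentage."""
    return min(quadrants) / max(quadrants) * 100

# Hypothetical nine-quadrant luminance readings in cd/m²
readings = [305, 312, 298, 310, 320, 301, 290, 300, 295]
print(f"contrast {contrast_ratio(320, 0.32):.0f}:1")              # 1000:1
print(f"distribution {brightness_distribution(readings):.0f} %")  # 290/320 -> 91 %
```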
Outdoor use is part of the display test. We check how legible the display remains (content visibility, reflections, etc.) in bright surroundings (3,000-10,000 cd/m², cloudy to sunny). The criteria for this test are: the type of display surface (matte panels prevent reflections), the brightness of the picture, and the contrast of the picture.
Colors: out-of-the-box vs. calibrated
The naked eye can hardly detect color deviations from the ideal (sRGB) if the DeltaE is smaller than 3. Deviations with a DeltaE below 5 are still very difficult to spot, while differences become increasingly noticeable at higher values (see the following figure, actual vs. target). Hence, devices intended for editing graphics, images, and videos should have a DeltaE smaller than 3 and respond well to calibration.
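As an illustration of what a DeltaE value expresses: the simplest variant, CIE76, is just the Euclidean distance between the measured and the target color in L*a*b* space (calibration suites typically use the more elaborate CIEDE2000 formula; the patch values below are invented):

```python
import math

def delta_e_76(lab1, lab2) -> float:
    """CIE76 color difference: Euclidean distance in L*a*b* space.
    Modern reports usually use CIEDE2000, but CIE76 illustrates the idea."""
    return math.dist(lab1, lab2)

measured = (52.0, 41.0, 27.0)  # hypothetical measured patch in L*a*b*
target   = (53.2, 40.0, 25.5)  # hypothetical sRGB target value
de = delta_e_76(measured, target)
print(f"DeltaE = {de:.2f}" + (" - invisible to the naked eye" if de < 3 else ""))
```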
In addition, we calibrate devices meant for graphics work and measure the color precision again. You can download the created .icc profile from the review page.
Apart from precise colors, the covered color space is important for professional graphics and image editing. We check the coverage of the sRGB and AdobeRGB98 color spaces. For this, we use the .icc file created by i1Profiler and the Argyll software.
We measure how fast the display can change from white to black (0% to 100%) and from gray to gray (50% to 80%) using an oscilloscope and a switchable-gain amplified silicon detector from Thorlabs (PDA100A-EC). Fast response times are particularly important for gaming displays, which typically feature response times of only a few milliseconds. With the same setup, we also test whether PWM is used to dim the display and up to which brightness setting. Low PWM frequencies (e.g. 200 Hz) can lead to sore eyes, headaches, and eye fatigue for some users. Some even claim to notice PWM at frequencies in the kHz range, although we lack scientific evidence for this.
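A simplified sketch of how a transition time could be read out of a sampled photodiode trace (the trace and tolerance below are illustrative; the real oscilloscope analysis is more involved):

```python
def transition_time(samples, start_level, end_level, tol=0.02):
    """Time for the signal to move from start_level to end_level.

    `samples` is a list of (time_ms, level) pairs from the photodiode,
    levels normalized to 0.0 (black) .. 1.0 (white)."""
    t0 = next(t for t, v in samples if abs(v - start_level) > tol)
    t1 = next(t for t, v in samples if abs(v - end_level) <= tol)
    return t1 - t0

# Hypothetical black-to-white trace sampled every millisecond
trace = [(0, 0.0), (1, 0.0), (2, 0.1), (3, 0.5), (4, 0.9), (5, 0.97), (6, 1.0)]
print(transition_time(trace, 0.0, 1.0), "ms")  # leaves black at t=2, reaches white at t=6
```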
Display Response Times
Response Time Black to White: 17 ms combined (↗ 4 ms rise, ↘ 13 ms fall)
The screen shows good response rates in our tests, but may be too slow for competitive gamers.
In comparison, all tested devices range from 0.8 ms (minimum) to 240 ms (maximum); 12 % of all devices are better. The measured response time is better than the average of all tested devices (26.3 ms).
Response Time 50% Grey to 80% Grey: 25 ms combined (↗ 9 ms rise, ↘ 16 ms fall)
The screen shows relatively slow response rates in our tests and may be too slow for gamers.
In comparison, all tested devices range from 0.9 ms (minimum) to 636 ms (maximum); 8 % of all devices are better. The measured response time is better than the average of all tested devices (42.2 ms).
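The "x % of all devices are better" figure is a simple percentile rank over our database. A sketch with an invented database excerpt:

```python
def percent_better(value_ms: float, database_ms: list) -> float:
    """Share of devices in the database with a faster (lower) response time."""
    better = sum(1 for v in database_ms if v < value_ms)
    return 100 * better / len(database_ms)

# Invented excerpt of black-to-white response times in ms
db = [8, 12, 14, 17, 21, 26, 30, 38, 55, 240]
print(f"{percent_better(17, db):.0f} % of all devices are better")  # 3 of 10 -> 30 %
```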
Display technologies differ, among other things, in their viewing-angle stability. Currently, IPS displays, which allow very flat viewing angles without image distortion, are widespread in higher-priced notebooks and particularly in tablets and smartphones. Cheap notebooks often use TN panels, which are much more dependent on the viewing angle but deliver better response times. We test the viewing-angle stability of the notebook subjectively (using the test model) and by turning the display in steps of 45°. For this test, the display is photographed at a fixed shutter speed and aperture in a dark room.
Our performance tests vary depending on the device class and the expected use. This section includes a variety of benchmarks (which test either a single component or the entire system and present the result in points) as well as real-world tests with various programs and games, since these often push the processor and graphics card to their limits.
Prior to the tests, the system is brought up-to-date with Windows updates. In addition, the latest graphics drivers are installed on devices with dedicated graphics solutions if the system actively suggests doing so. We do not change the clock rates unless we actually mention that we did so for demo purposes. Updating the graphics drivers after our tests and possible modifications can certainly boost gaming and benchmark performance. In our opinion, it is the duty of the manufacturer to provide customers with the latest drivers out-of-the-box and make useful updates as easy as possible for the user.
We consider the following aspects for evaluating the performance: CPU (Cinebench, Turbo analysis, etc.), system (PCMark), storage device (HD Tune, CrystalDiskMark, AS SSD, etc.), GPU (3DMark, Unigine Heaven, etc.), gaming (a selection of current games and popular older games).
- Comparison of mobile graphics cards - detailed information, technical details and performance values of all available mobile graphic cards - sorted by performance levels.
- Benchmark list of mobile graphics cards - table which allows sorting of all available graphics cards by various benchmarks and technical details.
- Smartphone graphics card benchmark list - sortable table including all available smartphone graphics solutions.
- Benchmark list of mobile CPUs - table which allows sorting of all available mobile CPUs by various benchmarks and technical details.
- Smartphone processor list - table with processors primarily used in smartphones.
- Games list - which GPU can run a certain game at what fps.
- Notebook SSD and HDD benchmarks - table with many benchmarks of Solid State Drives and hard drives.
We use a measurement microphone (Audix TM1), the analysis software ARTA, and a standardized test setup to measure the noise emissions of a test model. The microphone is fixed 15 cm from the notebook and decoupled from vibrations emanating from the test model. The measurements are taken in dB(A) (A-weighted decibels). The following are our test categories:
- Idle:
Minimum: minimum noise emission while the laptop is idle (Windows power plan: "Energy Saving")
Medium: average noise emission while the laptop is idle (power plan: "Energy Saving")
Maximum: highest noise emission while the laptop is idle (power plan: "High Performance")
- Load:
Medium: average noise emission under high system load (3DMark06, power plan: "High Performance")
Maximum: highest possible noise emission under full load (power plan: "High Performance", 100% CPU and GPU usage via Prime95 and FurMark)
The following may help the reader better understand the results:
In a quiet room, the human ear can still hear background noise, typically around 28 dB(A). A conversation at normal volume registers at around 60 dB(A). All these values depend on the distance from the noise source, which is why we fix our microphone at a constant distance from our test models. This gives us clear results that can be compared with each other. The measurements are presented graphically and can be judged subjectively (deviations caused by different frequencies are possible):
- Under 30 dB: barely audible.
- Up to 35 dB: audible but not distracting. Ideal level of noise emission for a laptop running office programs.
- Up to 40 dB: clearly audible, and might be distracting after a while.
- Up to 45 dB: might disturb the user if they are in a quiet room. Still acceptable while playing games.
- Over 50 dB: notebook emissions over this level are uncomfortably loud.
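The verbal scale above can be expressed as a small lookup function. This mapping is our illustration, not an official algorithm; note that the 45-50 dB band, which the scale leaves between its last two categories, is treated here as already loud:

```python
def noise_category(db_a: float) -> str:
    """Map a measured dB(A) value to the verbal scale used above.

    Illustrative mapping only; values between 45 and 50 dB(A) fall
    between the last two categories and are rounded up to 'loud' here."""
    if db_a < 30:
        return "barely audible"
    if db_a <= 35:
        return "audible but not distracting"
    if db_a <= 40:
        return "clearly audible"
    if db_a <= 45:
        return "might disturb the user in a quiet room"
    return "uncomfortably loud"

print(noise_category(33.5))  # audible but not distracting
```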
In addition to our measurements, we record a frequency diagram of each fan level. This allows judging whether the perceived noise is rather low or high frequency. In our measurements, the audible range starts from about 125 Hz and reaches up to about 16000 Hz depending on the volume.
The distribution of surface temperatures (which the user feels directly) is measured with an infrared thermometer (Raytek Raynger ST or similar) that never touches the test model. The top and bottom of the notebook are each split into nine quadrants, and the maximum measurable temperature in each quadrant is recorded.
The measurements are taken after an idle period of 60 minutes and a stress period of 60 minutes (100% CPU and GPU usage - Prime95 and Furmark).
In addition, we closely observe the GPU and CPU during the stress test with monitoring software (HWInfo64, HWMonitor, GPU-Z, etc.) and note any significant variations in performance (drops due to throttling).
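Detecting such drops in a clock-rate log boils down to flagging samples below a threshold. A sketch with made-up values (real logs would come from a tool such as HWInfo64; the 5 % tolerance is our illustrative choice):

```python
def detect_throttling(clock_log_mhz: list, base_clock_mhz: float, tolerance=0.05):
    """Return indices of samples where the core clock drops more than
    `tolerance` (5 % by default) below the base clock during a stress test.

    The log values below are invented for illustration."""
    floor = base_clock_mhz * (1 - tolerance)
    return [i for i, mhz in enumerate(clock_log_mhz) if mhz < floor]

log = [3400, 3400, 3300, 2100, 2000, 3400]  # hypothetical clock samples in MHz
print(detect_throttling(log, base_clock_mhz=2800))  # flags the two dips: [3, 4]
```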
The following scale describes the categories we put our measurements in:
- Less than 30 °C: Barely noticeable increase in temperature.
- 30 - 40 °C: Temperature rises noticeably but is bearable.
- 40 - 50 °C: Contact with the notebook over a long period of time at these temperatures will be uncomfortable.
- Over 50 °C: Very hot. Problematic if using the notebook on the lap.
In addition to the mentioned maximum values we create thermographic images (Flir One), which depict the distribution of the surface temperatures in a continuous way.
We rate the speakers based on sound quality and performance at maximum volume. Once again, we use a measurement microphone (Audix TM1), the analysis software ARTA, and a standardized setup. We play pink noise as the test signal, measure at maximum volume, and add measurements at lower volume whenever we detect overdrive (clipping).
Our frequency diagrams allow comparing different devices to each other and selecting / deselecting each frequency curve by means of checkboxes.
Apart from the battery life, we measure the power consumption of a notebook in various scenarios (measured at the power adapter). The test settings for each scenario are as follows:
- Idle: power consumption while the notebook is idle.
Minimum: all additional modules are off (Wi-Fi, Bluetooth, etc.), minimum brightness, and Windows power plan is set to "Energy Saving".
Medium: maximum brightness, additional modules off, Windows power plan: "Balanced".
Maximum: maximum power consumption while notebook is idle. All modules are on (Wi-Fi, Bluetooth, etc.), maximum brightness and power plan set to "High Performance".
- Load: notebook runs with maximum brightness, all modules on and power plan set to "High Performance".
Medium: For this test we used 3DMark06 and record the average power consumption in the first part of the test.
Maximum: stress test with 100% CPU and GPU load using Prime95 and Furmark benchmarks. Maximum power consumption possible on the test model.
On Android devices, we use the app "Stability Test" (the CPU+GPU test for Maximum, the Classic test for Medium).
On iOS devices, we use the Epic Citadel app in demo mode.
Currently, we use a Metrahit Energy multimeter from Gossen Metrawatt as our measurement device. Thanks to its high precision and simultaneous measurement of TRMS current and voltage, it can even measure the standby power consumption of smartphones.
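True power is the average of the instantaneous voltage-current products, which is why simultaneous sampling of both quantities matters. A sketch of that calculation with invented USB charging samples:

```python
def average_power_w(voltage_v: list, current_a: list) -> float:
    """Average (true) power from simultaneously sampled voltage and current:
    the mean of the instantaneous products v * i."""
    assert len(voltage_v) == len(current_a)
    return sum(v * i for v, i in zip(voltage_v, current_a)) / len(voltage_v)

# Hypothetical samples: ~5 V USB bus, varying current draw
volts = [5.05, 5.04, 5.06, 5.05]
amps  = [0.40, 0.42, 0.38, 0.40]
print(f"{average_power_w(volts, amps):.2f} W")
```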
We run our test models through 4 different tests:
- Minimum runtime: we use the "Classic" test of Battery Eater Classic to measure the minimum runtime of the test model. For this test, the screen brightness is set to maximum and all communication modules, such as WLAN, Bluetooth, etc. are turned on. Additionally, the Windows power plan is set to "High Performance".
For our Android-based test models, we use the app "Stability Test" to determine the minimum runtime. If the app does not run on the device (due to compatibility issues), we run a 3D game instead, which simulates high load and thus allows us to measure the minimum runtime.
For iOS-based devices, we use the Epic Citadel app in demo mode.
- Maximum runtime: the "Reader's" test of the Battery Eater tool is used to measure the maximum runtime of the test model. The brightness is set to minimum and all power-saving options are turned on. The Windows power plan is set to "Power Saver" and WLAN and Bluetooth are switched off.
Android-based devices are tested with a script which loads text pages from the site: http://www.notebookcheck.com/fileadmin/Notebooks/book1.htm
- DVD playback: runtime while the laptop is playing a DVD with maximum brightness, WLAN and Bluetooth off, and power-saving options turned on (such as the Windows "Power Saver" or higher - whichever is necessary for fluid playback of the DVD).
The reader should keep in mind that our test models are usually brand-new laptops. The battery of a laptop has to be discharged and recharged a few times before it delivers its peak runtime. Furthermore, our tests provide results taken over a relatively brief period of time. More information on how to optimize the battery life of your laptop can be found in our FAQ article.
Each test model receives a rating for each section mentioned in this article and is also rated in comparison to other models of the same class. The final rating is composed of 12 individual criteria, and the influence each criterion has on the final rating varies from class to class (netbook, gaming notebook, etc.). We present the rating on a scale of 0-100% (higher is better).
Finally, a total rating is calculated with the different impact of each aspect depending on the device class (weighting).
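The weighting step can be sketched as a weighted average. The section names and weights below are purely illustrative, not Notebookcheck's actual weighting:

```python
def total_rating(section_scores: dict, weights: dict) -> float:
    """Weighted total in percent. The weights depend on the device class;
    the example values below are invented for illustration."""
    assert set(section_scores) == set(weights)
    total_weight = sum(weights.values())
    return sum(section_scores[k] * weights[k] for k in weights) / total_weight

scores  = {"display": 88, "performance": 92, "emissions": 80, "battery": 85}
weights = {"display": 0.25, "performance": 0.25, "emissions": 0.2, "battery": 0.3}
print(f"{total_rating(scores, weights):.1f} %")  # 86.5 %
```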
The various rating criteria (the case and input device ratings are excluded) are processed with a special algorithm, which uses the various measurements and benchmark data in our database to deliver the result.
More information about the rating system can be found here.