# AMD A10-7860k APU no graphics acceleration

## gr3m1in

Hi everyone!

I need some assistance getting acceleration back: currently I have no 2D/3D acceleration, and not even KDE Plasma's desktop effects work.

I have an AMD A10-7860K with Kaveri [Radeon R7 Graphics] (rev d6), and I previously had it running with radeon+radeonsi.

Of course, I have read and followed https://wiki.gentoo.org/wiki/Radeon and https://wiki.gentoo.org/wiki/AMDGPU, and lots of other guides, with no luck.

I'm not new to Gentoo, so here is what I have tried:

1. Switching between kernel versions: 4.4 LTS, 4.9 LTS, and the latest 4.11.3.

2. Switching between drivers: (radeon+radeonsi) and (amdgpu+radeonsi, with AMDGPU's SI and CIK parts enabled).

3. Re-emerging everything somehow related to the problem.

The drivers were always compiled into the kernel.

Obviously, I have the firmware compiled into the kernel as well.

KMS is enabled, framebuffers are disabled.

At boot I get a proper-looking framebuffer in all cases.

Xorg starts with no errors, but there is a long delay (about 10 seconds) of black screen after OpenRC finishes and before the SDDM login screen appears.

No related errors were found in dmesg, messages or Xorg.0.log.
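For reference, here is the kind of quick check I used to confirm which kernel driver was actually bound to the GPU (a sketch; the helper function and the captured sample are just for illustration, taken from my own lspci output):

```shell
#!/bin/sh
# Extract the "Kernel driver in use:" value from `lspci -k`-style output.
driver_in_use() {
    sed -n 's/^[[:space:]]*Kernel driver in use:[[:space:]]*//p'
}

# On a live system you would run:
#   lspci -k | grep -A3 VGA | driver_in_use
# Here, demonstrated on captured output:
sample='00:01.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Kaveri [Radeon R7 Graphics] (rev d6)
        Subsystem: Micro-Star International Co., Ltd. [MSI] Kaveri [Radeon R7 Graphics]
        Kernel driver in use: radeon'
printf '%s\n' "$sample" | driver_in_use
```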

Here is dmesg

https://pastebin.com/JQf6Uzw7

And here is Xorg's log

https://pastebin.com/Bf0CWNVL

I will post any additional logs and configs on demand, feel free to ask.

Many thanks in advance.

----------

## Zucca

Strange...

I need to have "amdgpu radeon radeonsi" in my VIDEO_CARDS to get hardware acceleration. Then I use the amdgpu driver with X.

Could you post your "glxinfo -B" output?
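The things to look for in "glxinfo -B" can be checked mechanically, too. A sketch (the renderer string in the sample is made up for illustration; the rule of thumb is that acceleration works when direct rendering is on and the renderer is not the llvmpipe/softpipe software fallback):

```shell
#!/bin/sh
# Return success if a saved `glxinfo -B` dump indicates hardware acceleration.
accel_ok() {
    grep -q '^direct rendering: Yes' "$1" &&
    ! grep -Eiq 'llvmpipe|softpipe|software rasterizer' "$1"
}

# Demo on a captured snippet (on a live system: glxinfo -B > some-file)
sample=$(mktemp)
cat > "$sample" <<'EOF'
direct rendering: Yes
OpenGL renderer string: Gallium 0.4 on AMD KAVERI
EOF
if accel_ok "$sample"; then
    echo "hardware acceleration active"
else
    echo "software rendering"
fi
```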

----------

## gr3m1in

I also have 

```
VIDEO_CARDS="amdgpu radeonsi radeon"
```

and so all packages are compiled with support for these drivers.

To switch between drivers, I blacklist the unwanted one via GRUB

```
modprobe.blacklist=radeon
```

or by not compiling it into the kernel at all.
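For the record, the kernel-side blacklist can be made persistent through GRUB2 like this (a sketch; the file path is the standard GRUB2 one on Gentoo). Note that modprobe.blacklist only affects loadable modules, which is why the built-in-driver case needs the kernel reconfigured instead:

```shell
# /etc/default/grub -- append to the default kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="modprobe.blacklist=radeon"

# then regenerate the config:
# grub-mkconfig -o /boot/grub/grub.cfg
```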

Here is the output of "glxinfo -B":

https://pastebin.com/r1LBjC1j

Which graphics card or APU model do you use?

Did you tune your xorg.conf?

----------

## Ant P.

glxinfo claims you have a working hardware driver but those "pp:" warnings shouldn't happen. Do you have a drirc with messed up settings somewhere?
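A quick way to see whether any drirc is in play at all is to list the places Mesa's driconf loader reads (a sketch; /etc/drirc and ~/.drirc are the usual default search paths, the helper is just for illustration):

```shell
#!/bin/sh
# Report which driconf files exist out of a list of candidate paths.
check_drirc() {
    for f in "$@"; do
        [ -e "$f" ] && echo "found: $f"
    done
    return 0
}

check_drirc /etc/drirc "$HOME/.drirc"
```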

----------

## gr3m1in

Yep, I have seen this; however, I didn't modify that file...

I've also googled it without finding anything useful, e.g. https://bugs.freedesktop.org/show_bug.cgi?id=99549

Here is its current state: https://pastebin.com/vgNPrznu

Do I have to change something in it?

----------

## Zucca

 *gr3m1in wrote:*   

> Which graphics card or APU model do you use?
> 
> Did you tune your xorg.conf?

I actually have an R9 Nano, but it still required "radeon" in VIDEO_CARDS. And yes, I did tune my xorg.conf...

Don't blacklist the radeon module. Also keep the radeon X11 driver installed (it should be pulled in because your VIDEO_CARDS contains radeon). Create an xorg.conf and tell X to use the amdgpu X11 driver specifically.

```
Section "Device"
        Identifier "Nano"
        Driver     "amdgpu"

        Option "vblank_mode"      "on"
        Option "EnablePageFlip"   "off"
        Option "SwapBuffersWait"  "on"
        Option "EXAVsync"         "on"
        Option "DRI"              "3"
        Option "TearFree"         "on"
EndSection
```

... that config (I included only the relevant parts) is a remnant of my multi-seat setup... But I found that it works on my current setup as well. I don't remember what those options do anymore... :P But you can try them too. :)

----------

## gr3m1in

The wikis mention blacklisting "radeon" or "amdgpu" at the kernel level (not the Xorg level); otherwise an unexpected kernel driver may be bound to the graphics card during boot.

In my case, with both drivers present, "radeon" will always be used unless blacklisted, so blacklisting is the only way to force amdgpu when needed.

Your Radeon R9 series is officially supported by AMDGPU, since it is GCN 1.2 (AFAIK), while my R7 is older than GCN 1.2 and is currently only "experimental" with amdgpu.

Officially, my APU should be handled by the "radeon" driver with "radeonsi" enabled.

Also, why did you enable the "EXAVsync" setting?

EXA is supposed to be used with pretty old cards...

----------

## Zucca

 *gr3m1in wrote:*   

> Also, why did you enable the "EXAVsync" setting?
> 
> EXA is supposed to be used with pretty old cards...

 Because *Zucca wrote:*   

> remnants of my multi seated setup... But I found that it works on my current setup as well. I don't remember what those options do anymore... :P

... but now that you mentioned it, I might as well cut that line.

As for your problem... if amdgpu really is at the "experimental" stage for your GPU, it might be best to look for a solution using the radeon driver only. The errors that came from glxinfo -B are unknown to me... :(

Have you tried running something like glxgears? How does it perform?

----------

## gr3m1in

The glxgears output

1. with default window size

```
pp: Failed to translate a shader for depth1fs
pp: Failed to translate a shader for blend2fs
pp: Failed to translate a shader for color1fs
pp: Failed to translate a shader for blend2fs
16142 frames in 5.0 seconds = 3228.324 FPS
16703 frames in 5.0 seconds = 3340.546 FPS
16687 frames in 5.0 seconds = 3337.397 FPS
16584 frames in 5.0 seconds = 3316.795 FPS
16641 frames in 5.0 seconds = 3328.056 FPS
16468 frames in 5.0 seconds = 3293.538 FPS
16600 frames in 5.0 seconds = 3319.894 FPS
16746 frames in 5.0 seconds = 3349.096 FPS
16800 frames in 5.0 seconds = 3359.810 FPS
```

2. with maximized window size

```
pp: Failed to translate a shader for depth1fs
pp: Failed to translate a shader for blend2fs
pp: Failed to translate a shader for color1fs
pp: Failed to translate a shader for blend2fs
1047 frames in 5.0 seconds = 209.389 FPS
1049 frames in 5.0 seconds = 209.637 FPS
1049 frames in 5.0 seconds = 209.639 FPS
1047 frames in 5.0 seconds = 209.349 FPS
1049 frames in 5.0 seconds = 209.659 FPS
1049 frames in 5.0 seconds = 209.694 FPS
1049 frames in 5.0 seconds = 209.613 FPS
1049 frames in 5.0 seconds = 209.626 FPS
```

when it should be several times higher...

And yes, I'm currently running the radeon driver

```
00:01.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Kaveri [Radeon R7 Graphics] (rev d6)
        Subsystem: Micro-Star International Co., Ltd. [MSI] Kaveri [Radeon R7 Graphics]
        Kernel driver in use: radeon
```

with radeonsi enabled

```
VIDEO_CARDS="amdgpu radeonsi radeon"
```

----------

## Zucca

Hm. It still might be the mesa bug you linked previously.

Maybe try out the tips from there? Downgrading mesa or disabling every post-processing option in driconf might solve the issue.
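On Gentoo, pinning mesa below a suspect version looks roughly like this (a sketch; the version boundary here is an assumption, pick whichever release predates the regression):

```shell
# /etc/portage/package.mask -- mask the suspect versions
>media-libs/mesa-17.0.0

# then rebuild:
# emerge --oneshot media-libs/mesa
```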

----------

## gr3m1in

That helped indeed!

At least via the command line, I got the following:

1. with default window size 

```
$ PP_DEBUG=1 pp_jimenezmlaa=0 pp_jimenezmlaa_color=0 vblank_mode=0 glxgears
ATTENTION: default value of option pp_jimenezmlaa overridden by environment.
ATTENTION: default value of option pp_jimenezmlaa_color overridden by environment.
ATTENTION: option value of option pp_jimenezmlaa ignored.
ATTENTION: option value of option pp_jimenezmlaa_color ignored.
ATTENTION: default value of option vblank_mode overridden by environment.
Initializing the post-processing queue.
34891 frames in 5.0 seconds = 6978.103 FPS
35727 frames in 5.0 seconds = 7145.320 FPS
35587 frames in 5.0 seconds = 7117.357 FPS
35102 frames in 5.0 seconds = 7020.262 FPS
35121 frames in 5.0 seconds = 7024.083 FPS
35564 frames in 5.0 seconds = 7112.633 FPS
35888 frames in 5.0 seconds = 7177.521 FPS
35856 frames in 5.0 seconds = 7171.044 FPS
```

2. with maximized window size 

```
$ PP_DEBUG=1 pp_jimenezmlaa=0 pp_jimenezmlaa_color=0 vblank_mode=0 glxgears
ATTENTION: default value of option pp_jimenezmlaa overridden by environment.
ATTENTION: default value of option pp_jimenezmlaa_color overridden by environment.
ATTENTION: option value of option pp_jimenezmlaa ignored.
ATTENTION: option value of option pp_jimenezmlaa_color ignored.
ATTENTION: default value of option vblank_mode overridden by environment.
Initializing the post-processing queue.
5473 frames in 5.0 seconds = 1094.505 FPS
5496 frames in 5.0 seconds = 1099.142 FPS
5488 frames in 5.0 seconds = 1097.539 FPS
5502 frames in 5.0 seconds = 1100.372 FPS
5502 frames in 5.0 seconds = 1100.323 FPS
5495 frames in 5.0 seconds = 1098.990 FPS
```

Now I'll update drirc, do some tests after a reboot, and then post the results...

----------

## gr3m1in

I've added a "Default" application block at the top of the "device" section, as follows:

```
<driconf>
    <!-- Please always enable app-specific workarounds for all drivers and
         screens. -->
    <device>
        <application name="Default">
            <option name="pp_jimenezmlaa"       value="0" />
            <option name="pp_jimenezmlaa_color" value="0" />
            <option name="vblank_mode"          value="0" />
            <option name="pp_celshade"          value="0" />
            <option name="pp_noblue"            value="0" />
            <option name="pp_nored"             value="0" />
            <option name="pp_nogreen"           value="0" />
        </application>
        ... some default stuff
    </device>
</driconf>
```

The post-processing errors are gone, and the glxgears results remain higher, as shown in the previous post, but they are still a few times lower than expected: previously it was about 4k FPS in a maximized window.

I have also downgraded x11-drivers/xf86-video-ati from 7.9.0 to 7.8.0, as was mentioned on the Arch forums.

However, games under Wine (e.g. WoT) are still slow and laggy, and KDE Plasma's effects still don't work with any OpenGL option; they only work with XRender.

Did I miss something else?

----------

## Zucca

I haven't found any information on how to solve this, but I played around with the Gallium HUD...

Running this monster of a command

```
PP_DEBUG=1 pp_jimenezmlaa=0 pp_jimenezmlaa_color=0 vblank_mode=0 GALLIUM_HUD=".y250.w250.dfps,.c100GPU-load,.dVRAM-usage+requested-VRAM,.c100cpu0+cpu1+cpu2+cpu3+cpu4+cpu6+cpu6+cpu7:100" glxgears
```

... should load my GPU all the way to 100%, but instead it floats around the 50-55% range. radeontop tells the same story.

It used to be at 100%. I don't know if this is related at all... Also, the CPU is not capping the performance.

Also strange:

```
ATTENTION: default value of option pp_jimenezmlaa overridden by environment.
ATTENTION: default value of option pp_jimenezmlaa_color overridden by environment.
ATTENTION: option value of option pp_jimenezmlaa ignored.
ATTENTION: option value of option pp_jimenezmlaa_color ignored.
ATTENTION: default value of option vblank_mode overridden by environment.
ATTENTION: option value of option vblank_mode ignored.
Initializing the post-processing queue.
14087 frames in 5.0 seconds = 2817.219 FPS
```

... while it still seems that those values are not actually ignored: I can easily set vblank_mode to 3 and the FPS drops to 60.

Do you happen to see the same behaviour?

EDIT: Also, on my setup, changing the window size does not seem to affect the FPS...

----------

## gr3m1in

In my case, when running with this command

```
GALLIUM_HUD=".y250.w250.dfps,.c100GPU-load,.dVRAM-usage+requested-VRAM,.c100cpu0+cpu1+cpu2+cpu3:100" glxgears
```

(adjusted since my APU has 4 CPU cores; the pp-related options were already set via drirc), I saw lower FPS than without the Gallium HUD specified.

Here is a capture of the charts:

https://gr3m1in.com/static/public/glxgears-w-gallium-hud.png

Interestingly, when the glxgears window lost focus (when the screen-capture program opened and grabbed the focus), the FPS rose slightly...

Then I realized that all the previous tests were run with the console focused, so I re-ran the test without the HUD options, keeping the glxgears window focused and maximized, with these results

```
$ glxgears
3182 frames in 5.0 seconds = 636.270 FPS
3182 frames in 5.0 seconds = 636.314 FPS
```

and with glxgears maximized and the console focused

```
$ glxgears
5487 frames in 5.0 seconds = 1097.330 FPS
5488 frames in 5.0 seconds = 1097.399 FPS
```

Why and how does having focus affect the FPS?

BTW, do you have your mesa compiled with "osmesa" flag enabled?

I do, so could off-screen rendering being enabled cause this difference?

My mesa USE flags:

```
classic d3d9 dri3 egl gallium gbm gles1 gles2 llvm nptl opencl openmax osmesa vdpau vulkan wayland xa xvmc -bindist -debug -pax_kernel -pic -selinux -unwind -vaapi -valgrind
```

And in advance: no, disabling the vulkan USE flag does not change anything...

----------

## Zucca

```
-abi_x86_32 -bindist +classic +d3d9 -debug +dri3 +egl +gallium +gbm +gles1 +gles2 +llvm +nptl +opencl +openmax +osmesa -pax_kernel -pic -unwind +vaapi -valgrind +vdpau -video_cards_i915 -video_cards_i965 -video_cards_imx -video_cards_intel -video_cards_nouveau -video_cards_r100 -video_cards_r200 -video_cards_r300 -video_cards_r600 +video_cards_radeon +video_cards_radeonsi -video_cards_vmware -vulkan -wayland +xa +xvmc 
```

But at least your GPU is going full throttle.

At this point I can only recommend increasing the debugging info of everything between the kernel module and mesa. I'll do the same. If I get my GPU to work at 100%, I'll report how I managed to do it, but I don't think we have the same problem.

Otherwise - I'm out of guesses and answers here. :(

----------

## gr3m1in

However, despite running at "full throttle", its actual performance is lower than expected, meaning lower than what it achieved previously (at least).

It looks like not all the expected features (which features?) are being used, so it does its best without them.

Can you please describe the exact way you are going to increase the debugging?

I would like to compare the results of attempts performed the same way.

And also thanks for not leaving me alone here! ))

----------

## Zucca

I remember I had to enable AMD PowerPlay to get proper performance... However, that kernel configuration option is completely missing now. Previously I found it conflicted with another AMDGPU option.

There seem to be no debugging options for radeon or amdgpu. :\

I mean in here:

```
no_wb:Disable AGP writeback for scratch registers (int)
modeset:Disable/Enable modesetting (int)
dynclks:Disable/Enable dynamic clocks (int)
r4xx_atom:Enable ATOMBIOS modesetting for R4xx (int)
vramlimit:Restrict VRAM for testing, in megabytes (int)
agpmode:AGP Mode (-1 == PCI) (int)
gartsize:Size of PCIE/IGP gart to setup in megabytes (32, 64, etc., -1 = auto) (int)
benchmark:Run benchmark (int)
test:Run tests (int)
connector_table:Force connector table (int)
tv:TV enable (0 = disable) (int)
audio:Audio enable (-1 = auto, 0 = disable, 1 = enable) (int)
disp_priority:Display Priority (0 = auto, 1 = normal, 2 = high) (int)
hw_i2c:hw i2c engine enable (0 = disable) (int)
pcie_gen2:PCIE Gen2 mode (-1 = auto, 0 = disable, 1 = enable) (int)
msi:MSI support (1 = enable, 0 = disable, -1 = auto) (int)
lockup_timeout:GPU lockup timeout in ms (default 10000 = 10 seconds, 0 = disable) (int)
fastfb:Direct FB access for IGP chips (0 = disable, 1 = enable) (int)
dpm:DPM support (1 = enable, 0 = disable, -1 = auto) (int)
aspm:ASPM support (1 = enable, 0 = disable, -1 = auto) (int)
runpm:PX runtime pm (1 = force enable, 0 = disable, -1 = PX only default) (int)
hard_reset:PCI config reset (1 = force enable, 0 = disable (default)) (int)
vm_size:VM address space size in gigabytes (default 4GB) (int)
vm_block_size:VM page table size in bits (default depending on vm_size) (int)
deep_color:Deep Color support (1 = enable, 0 = disable (default)) (int)
use_pflipirq:Pflip irqs for pageflip completion (0 = disable, 1 = as fallback, 2 = exclusive (default)) (int)
bapm:BAPM support (1 = enable, 0 = disable, -1 = auto) (int)
backlight:backlight support (1 = enable, 0 = disable, -1 = auto) (int)
auxch:Use native auxch experimental support (1 = enable, 0 = disable, -1 = auto) (int)
mst:DisplayPort MST experimental support (1 = enable, 0 = disable) (int)
uvd:uvd enable/disable uvd support (1 = enable, 0 = disable) (int)
vce:vce enable/disable vce support (1 = enable, 0 = disable) (int)
```

Most of these options are something I'm not familiar with.

I guess you could increase the kernel logging level, for example by putting "loglevel=7" on the kernel command line, and then see what happens.
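Beyond loglevel=7, the DRM subsystem has its own verbosity bitmask. Something like this on the kernel command line makes the graphics driver much chattier (a sketch; drm.debug=0x1e enables the core/driver/KMS/atomic debug categories):

```shell
# kernel command line additions (e.g. in /etc/default/grub):
#   loglevel=7     -- show all kernel message levels on the console
#   drm.debug=0x1e -- verbose DRM core/driver/kms/atomic output
GRUB_CMDLINE_LINUX_DEFAULT="loglevel=7 drm.debug=0x1e"

# then watch the output live:
# dmesg -w | grep -i drm
```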

Again, I'll inform you as well if I find something that works here. ;)

----------

## gr3m1in

After today's update, KDE's effects came back and now work smoothly with OpenGL 3.1 setting.

However, I noticed no other changes, so maybe this side of the problem was related to KDE Plasma itself...

Current versions are:

```
sys-kernel/gentoo-sources-4.12.2 (-build -experimental -symlink)
media-libs/mesa-17.1.5 (classic d3d9 dri3 egl gallium gbm gles1 gles2 llvm nptl opencl openmax osmesa vaapi vdpau vulkan wayland xa xvmc -bindist -debug -pax_kernel -pic -selinux -unwind -valgrind ABI_MIPS="-n32 -n64 -o32" ABI_PPC="-32 -64" ABI_S390="-32 -64" ABI_X86="32 64 -x32" VIDEO_CARDS="radeon radeonsi -freedreno -i915 -i965 -imx -intel -nouveau -r100 -r200 -r300 -r600 -vc4 -vivante -vmware")
```

The graphics driver in use is amdgpu, with

```
CONFIG_DRM_AMDGPU_SI=y
CONFIG_DRM_AMDGPU_CIK=y
CONFIG_DRM_AMDGPU_USERPTR=y
```

and these kernel parameters in GRUB

```
amdgpu.exp_hw_support=1 amdgpu.powerplay=1 amdgpu.i2c_hw=0
```
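To verify those parameters actually took effect, the live values can be read back from sysfs. A sketch (the /sys/module layout is standard; the helper function is just for illustration):

```shell
#!/bin/sh
# Print name=value for every parameter a loaded module exposes in sysfs.
show_params() {
    dir="$1"
    [ -d "$dir" ] || { echo "module not loaded or built without parameters"; return 0; }
    for p in "$dir"/*; do
        [ -e "$p" ] || continue
        printf '%s=%s\n' "${p##*/}" "$(cat "$p")"
    done
}

# On a live system:
show_params /sys/module/amdgpu/parameters
```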

And that's all so far...

Any further suggestions are very welcome!

----------

## Zucca

Hi. Just yesterday I managed to get full usage out of my GPU.

I hope these help you as well...

Here's what I did:

1. Add amdgpu.dpm=1 to the kernel command line (1), then reboot.

2. Set the power state (2):

```
echo performance > /sys/class/drm/card0/device/power_dpm_state
```

3. Force the performance level (3):

```
echo high > /sys/class/drm/card0/device/power_dpm_force_performance_level
```

4. Run some program with vblank_mode=0 and watch GPU utilization get close to 100% (4).

(1) This should default to 1 on my R9 Nano, but it seemed to have an effect anyway. O.o Also, replace amdgpu with radeon if you are using the radeon driver.

(2) This was default already on my setup but just in case...

(3) This setting is intended for debugging/testing. It may be that only (2) is needed.

(4) glxgears doesn't show the performance gain, but running supertuxkart did.

There you go. I haven't had time to do much testing but this looks promising.
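The sysfs pokes above can be bundled into a small helper (a sketch; the card path and the accepted values are exactly those from the steps above, and on a real system this must run as root):

```shell
#!/bin/sh
# Force a GPU into its high-performance DPM state via sysfs.
# CARD can be overridden for testing or for multi-GPU systems.
CARD="${CARD:-/sys/class/drm/card0/device}"

set_dpm_high() {
    card="$1"
    echo performance > "$card/power_dpm_state"
    echo high > "$card/power_dpm_force_performance_level"
}

# set_dpm_high "$CARD"   # uncomment on a live system (needs root)
```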

----------

## gr3m1in

Hi Zucca!

Thanks for that; it brought some performance improvements indeed!

It is still lower than expected, but it is now nearly the same under both the radeon and amdgpu drivers, which looks strange to me.

The only explanation I can imagine is that something common to both cases is broken and hurts performance under both drivers.

So now I'm going to play around with Mesa's (and its dependencies') versions and USE flags, trying to find some magic combination.

----------

## Zucca

I believe the problem isn't in the driver itself. When I use Mandelbulber (v2) with OpenCL enabled, I get 100% GPU usage.

I'm thinking it might be mesa that's the culprit here.

----------

