Integrating Crashpad with Qt on Linux

I am trying to integrate Crashpad into a Qt application on Linux. I am using a BugSplat database for testing; I followed this tutorial and managed to build this "dummy" application, which should serve as an example of using Qt with Crashpad.
I made minor adjustments to the files to fix the build for my Linux platform, primarily to make changing the version easier and to create the crashpad directory and files next to the application binaries.
All of the changes are listed below as a diff file:
diff --git a/Crashpad/Tools/Linux/symbols.sh b/Crashpad/Tools/Linux/symbols.sh
index 095f295..b065438 100644
--- a/Crashpad/Tools/Linux/symbols.sh
+++ b/Crashpad/Tools/Linux/symbols.sh
@@ -3,6 +3,6 @@ symupload="${1}/Crashpad/Tools/Linux/symupload"
app="${2}/${4}.debug"
sym="${4}.sym"
url="https://${3}.bugsplat.com/post/bp/symbol/breakpadsymbols.php?appName=${4}&appVer=${5}"
-
+echo ${url}
eval "${dump_syms} ${app} > ${sym}"
eval $"${symupload} \"${sym}\" \"${url}\""
diff --git a/main.cpp b/main.cpp
index db97dd4..b721dc5 100644
--- a/main.cpp
+++ b/main.cpp
@@ -26,7 +26,7 @@ int main(int argc, char *argv[])
{
QString dbName = "Fred";
QString appName = "myQtCrasher";
- QString appVersion = "1.0";
+ QString appVersion = QString::number(MAJOR_VERSION) + "." + QString::number(MINOR_VERSION);
initializeCrashpad(dbName, appName, appVersion);
diff --git a/myQtCrasher.pro b/myQtCrasher.pro
index 3005e41..3bf7a3e 100644
--- a/myQtCrasher.pro
+++ b/myQtCrasher.pro
@@ -15,6 +15,12 @@ DEFINES += QT_DEPRECATED_WARNINGS
# You can also select to disable deprecated APIs only up to a certain version of Qt.
#DEFINES += QT_DISABLE_DEPRECATED_BEFORE=0x060000 # disables all the APIs deprecated before Qt 6.0.0
+MAJOR_VERSION = 4
+MINOR_VERSION = 9
+
+DEFINES += MAJOR_VERSION=$$MAJOR_VERSION
+DEFINES += MINOR_VERSION=$$MINOR_VERSION
+
SOURCES += \
main.cpp \
mainwindow.cpp \
@@ -94,7 +100,8 @@ linux {
LIBS += -L$$PWD/Crashpad/Libraries/Linux/ -lbase
# Copy crashpad_handler to build directory and run dump_syms and symupload
- QMAKE_POST_LINK += "cp $$PWD/Crashpad/Bin/Linux/crashpad_handler $$OUT_PWD/crashpad"
- QMAKE_POST_LINK += "&& bash $$PWD/Crashpad/Tools/Linux/symbols.sh $$PWD $$OUT_PWD fred myQtCrasher 1.0 > $$PWD/Crashpad/Tools/Linux/symbols.out 2>&1"
- QMAKE_POST_LINK += "&& cp $$PWD/Crashpad/attachment.txt $$OUT_PWD/attachment.txt"
+ QMAKE_POST_LINK += "mkdir $$OUT_PWD/crashpad"
+ QMAKE_POST_LINK += "&& cp $$PWD/Crashpad/Bin/Linux/crashpad_handler $$OUT_PWD/crashpad"
+ QMAKE_POST_LINK += "&& bash $$PWD/Crashpad/Tools/Linux/symbols.sh $$PWD $$OUT_PWD fred myQtCrasher $$MAJOR_VERSION"."$$MINOR_VERSION > $$PWD/Crashpad/Tools/Linux/symbols.out 2>&1"
+# QMAKE_POST_LINK += "&& cp $$PWD/Crashpad/attachment.txt $$OUT_PWD/attachment.txt" #if any attachment is needed
}
The build generates both myQtCrasher.debug and the externally generated myQtCrasher.sym symbols file.
Using their dummy database (the credentials are fred@bugsplat.com with Flintstone as the password), I have managed to report a crash, but for some reason the crash report does not contain the uploaded symbols. I have tried to upload the symbols manually, using the dump_syms and then symupload applications, by sending a request to https://fred.bugsplat.com/post/bp/symbol/breakpadsymbols.php?appName=myQtCrasher&appVer=4.9, but without success.
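For reference, the manual route mirrors what symbols.sh does; a minimal sketch, using the file names and URL above and assuming the dump_syms and symupload binaries bundled with the example project:
# Generate the Breakpad symbol file from the debug binary, then post it to BugSplat,
# essentially what Crashpad/Tools/Linux/symbols.sh runs.
./dump_syms myQtCrasher.debug > myQtCrasher.sym
./symupload myQtCrasher.sym "https://fred.bugsplat.com/post/bp/symbol/breakpadsymbols.php?appName=myQtCrasher&appVer=4.9"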
The symupload application output is
Failed to open curl lib from binary, use libcurl.so instead
Successfully sent the symbol file.
How can I properly upload the *.sym file and view the stack trace on a crash?
Thanks for your help!

We were able to get the symbols to resolve for this crash report. Right after the symupload warning "Failed to open curl lib from binary, use libcurl.so instead" it says "Successfully sent the symbol file". I confirmed the symbol file was uploaded correctly.
I found 2 issues with the symbol file. When minidump_stackwalk was looking for the corresponding symbols it was looking for:
/myQtCrasher-4.9/myQtCrasher/C03D64A46AB29A093459A592482836E50/myQtCrasher.sym
The file that was uploaded to BugSplat was myQtCrasher.debug.sym and the module on the first line of the sym file was myQtCrasher.debug. I changed the file name to myQtCrasher.sym and the module name to myQtCrasher and the symbols for the myQtCrasher stack frames displayed function names and line numbers.
I'm not sure if these issues with mismatched symbols were due to your script changes but it seems like our script attempts to set the following variables:
app="${2}/${4}.debug"
sym="${4}.sym"
Therefore the script expects the user to generate sym files from the .debug file, but name them based on the corresponding executable.
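A minimal sketch of that convention, assuming Breakpad's dump_syms and standard sed (the module id on the MODULE line is left untouched):
# Produce the symbol file from the .debug binary, but name it after the executable.
./dump_syms myQtCrasher.debug > myQtCrasher.sym
# The first line reads "MODULE Linux x86_64 <id> myQtCrasher.debug";
# rename the trailing module name so it matches the executable referenced by the minidump.
sed -i '1s/myQtCrasher\.debug$/myQtCrasher/' myQtCrasher.sym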

Related

Yocto do_package: Didn't find service unit specified in SYSTEMD_SERVICE_

Description
I want to install a service into my image, but it is failing with the following errors:
ERROR: mypackage-git-r0 do_package: Didn't find service unit 'mypackage.service', specified in SYSTEMD_SERVICE_mypackage.
ERROR: Logfile of failure stored in: <build-location>/poky/build/tmp/work/cortexa53-poky-linux/mypackage/git-r0/temp/log.do_package.7924
ERROR: Task (<layer-location>/meta-mypackage-oe/recipes-mypackage/mypackage/mypackage_git.bb:do_package) failed with exit code '1'
Recipe
The Python sources are cloned from git; now I want to create a service that runs at boot. Here is the recipe:
SUMMARY = " "
DESCRIPTION = " "
HOMEPAGE = " todo "
LICENSE = "CLOSED"
SRC_URI += "<URL>"
SRC_URI += "file://mypackage.service"
SRCREV = "<srcrev>"
S = "${WORKDIR}/git"
inherit setuptools3 systemd
RDEPENDS_${PN} = " \
${PYTHON_PN}-pyserial \
${PYTHON_PN}-pyusb \
${PYTHON_PN}-terminal \
"
SYSTEMD_PACKAGES = "${PN}"
do_install_append () {
install -d ${D}${system_unitdir}
install -m 0755 ${WORKDIR}/mypackage.service ${D}{system_unitdir}
}
SYSTEMD_SERVICE_${PN} = "mypackage.service"
FILES_${PN} += "${system_unitdir}/mypackage.service"
Recipe structure
recipes-mypackage/mypackage/
├── mypackage
│   └── mypackage.service
└── mypackage_git.bb
1 directory, 2 files
Service file
NOTE: mypackage has a feature to run as a daemon using the -d option
[Unit]
Description=mypackage service
[Service]
Type=simple
ExecStart=/usr/bin/mypackage -d
[Install]
WantedBy=multi-user.target
Build configurations
Image recipe inherits core-image-base and contains
IMAGE_FEATURES += "package-management"
PACKAGE_CLASSES ?= "rpm deb package_deb"
DISTRO_FEATURES_append = " systemd"
VIRTUAL-RUNTIME_init_manager += "systemd"
inherit extrausers
local.conf contents
MACHINE = "raspberrypi3-64"
ENABLE_UART = "1"
RPI_USE_U_BOOT = "1"
GPU_FREQ = "250"
I might have messed up a lot of things in the recipe, so I need some pointers to clean up the recipe as well as to resolve the issue.
Thanks.
Replace system_unitdir with systemd_system_unitdir.
SYSTEMD_PACKAGES already defaults to ${PN}, so you can drop it; the same goes for FILES_${PN} += "${systemd_system_unitdir}/mypackage.service", because if systemd.bbclass finds your unit it will add it to the appropriate FILES_ variable automatically.
c.f. https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/classes/systemd.bbclass#n4
https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/classes/systemd.bbclass#n109
https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/classes/systemd.bbclass#n148
And for completeness, thanks to @Jussi Kukkonen for the comment: the $ sign is missing before {systemd_system_unitdir} in the install line.
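Putting those corrections together, the install step would look roughly like this (a sketch based on the points above, not a tested recipe):
do_install_append () {
    # systemd_system_unitdir (note the leading $ on the expansion), not system_unitdir
    install -d ${D}${systemd_system_unitdir}
    # unit files are conventionally installed with mode 0644
    install -m 0644 ${WORKDIR}/mypackage.service ${D}${systemd_system_unitdir}/mypackage.service
}
SYSTEMD_SERVICE_${PN} = "mypackage.service"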

Yocto:Execute shell script inside my source repository environment (inside downloads folder ?)

In Yocto I need to execute a bash script that depends on the source repository environment. My current approach is to execute the script directly in the downloads folder.
Pseudocode (I hope you know what I mean):
do_patch_append () {
os.system(' \
pushd ${DL_DIR}/svn/*/svn/myapp/Src; \
dos2unix ./VersionBuild/MakeVersionList.sh; \
./VersionBuild/MakeVersionList.sh ${S}/Src/VersionList.h; \
popd \
')
}
Before I start fiddling with the details: is this a valid approach? And is do_patch the right place?
Reason: The script MakeVersionList.sh generates a header file containing revision info from subversion. It executes "svn info" on several folders and parses its output into the header file VersionList.h, which will later be accessed in ${S} during compilation.
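For illustration only, such a generator script might look roughly like this (a hypothetical sketch; the real MakeVersionList.sh is not shown in this question):
#!/bin/bash
# Hypothetical sketch: write the svn revision of the current checkout into the
# header file passed as $1, so the build can embed version information (svn 1.9+).
out="${1:?usage: MakeVersionList.sh <output-header>}"
rev=$(svn info --show-item revision . 2>/dev/null || echo unknown)
{
    echo "#ifndef VERSION_LIST_H"
    echo "#define VERSION_LIST_H"
    echo "#define SVN_REVISION \"${rev}\""
    echo "#endif"
} > "${out}"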
Thanks a lot.
Thanks for the helpful advice. Here's how I finally did it. To minimize dependencies on server infrastructure (the SVN server) I put the version generation into do_fetch. I had to do a manual do_unpack step as well, because adding the generated file to SRC_URI (like ${DL_DIR}/svn/...) made the fetcher try to fetch it.
#
# Build bitstone application
#
SUMMARY = "Bitstone mega wallet"
SECTION = "Rockefeller"
LICENSE = "GPLv2"
LIC_FILES_CHKSUM = "file://COPYING;md5=bbea815ee2795b2f4230826c0c6b8814"
#Register for root file system aggregation
APP_NAME = "bst-wallet"
FILES_${PN} += "${bindir}/${APP_NAME}"
PV = "${SRCPV}"
SRCREV = "${AUTOREV}"
SVN_SRV = "flints-svn"
SRC_URI = "svn://${SVN_SRV}/svn/${APP_NAME};module=Src;protocol=http;user=fred;pswd=pebbles "
S = "${WORKDIR}/Src"
GEN_FILE = "VersionList.h"
do_fetch_append () {
    os.system(' \
        cd ./svn/{0}/svn/{1}/Src; \
        TZ=Pacific/Yap ./scripts/MakeVersionList.sh {2}; \
    '.format(d.getVar("SVN_SRV"), d.getVar("APP_NAME"), d.getVar("GEN_FILE"))
    )
}
do_unpack_append () {
    os.system(' \
        cp {3}/svn/{0}/svn/{1}/Src/{2} {4} \
    '.format(d.getVar("SVN_SRV"), d.getVar("APP_NAME"), d.getVar("GEN_FILE"), \
             d.getVar("DL_DIR"), d.getVar("S")) \
    )
}
do_install_append () {
    install -d ${D}${bindir}
    install -m 0755 ${APP_NAME} ${D}${bindir}/
}

Nodejs node binary core dumped (Illegal Instruction)

I am working in a BitBake environment. I am using nodejs version 10.15.3.
dest cpu == ppc64 linux
My problem is that the node binary core dumps and I am not able to identify the root cause. I am trying to compile nodejs for the dest cpu (ppc64).
I am not sure, but I guess there are runtime requirements which are not satisfied on the target machine.
Below is my recipe:
DESCRIPTION = "nodeJS Evented I/O for V8 JavaScript"
HOMEPAGE = "http://nodejs.org"
LICENSE = "MIT & BSD & Artistic-2.0"
LIC_FILES_CHKSUM = "file://LICENSE;md5=9ceeba79eb2ea1067b7b3ed16fff8bab"
DEPENDS = "openssl zlib icu"
DEPENDS_append_class-target = " nodejs-native"
inherit pkgconfig
COMPATIBLE_MACHINE_armv4 = "(!.*armv4).*"
COMPATIBLE_MACHINE_armv5 = "(!.*armv5).*"
COMPATIBLE_MACHINE_mips64 = "(!.*mips64).*"
SRC_URI = "http://nodejs.org/dist/v${PV}/node-v${PV}.tar.xz \
file://0001-Disable-running-gyp-files-for-bundled-deps.patch \
file://0003-Crypto-reduce-memory-usage-of-SignFinal.patch \
file://0004-Make-compatibility-with-gcc-4.8.patch \
file://0005-Link-atomic-library.patch \
file://0006-Use-target-ldflags.patch \
"
SRC_URI_append_class-target = " \
file://0002-Using-native-torque.patch \
"
SRC_URI[md5sum] = "d76210a6ae1ea73d10254947684836fb"
SRC_URI[sha256sum] = "4e22d926f054150002055474e452ed6cbb85860aa7dc5422213a2002ed9791d5"
S = "${WORKDIR}/node-v${PV}"
# v8 errors out if you have set CCACHE
CCACHE = ""
def map_nodejs_arch(a, d):
    import re
    if re.match('i.86$', a): return 'ia32'
    elif re.match('x86_64$', a): return 'x64'
    elif re.match('aarch64$', a): return 'arm64'
    elif re.match('(powerpc64|ppc64le)$', a): return 'ppc64'
    elif re.match('powerpc$', a): return 'ppc'
    return a
ARCHFLAGS_arm = "${@bb.utils.contains('TUNE_FEATURES', 'callconvention-hard', '--with-arm-float-abi=hard', '--with-arm-float-abi=softfp', d)} \
                ${@bb.utils.contains('TUNE_FEATURES', 'neon', '--with-arm-fpu=neon', \
                bb.utils.contains('TUNE_FEATURES', 'vfpv3d16', '--with-arm-fpu=vfpv3-d16', \
                bb.utils.contains('TUNE_FEATURES', 'vfpv3', '--with-arm-fpu=vfpv3', \
                '--with-arm-fpu=vfp', d), d), d)}"
GYP_DEFINES_append_mipsel = " mips_arch_variant='r1' "
ARCHFLAGS ?= ""
# Node is way too cool to use proper autotools, so we install two wrappers to forcefully inject proper arch cflags to workaround gypi
do_configure () {
    rm -rf ${S}/deps/openssl
    export LD="${CXX}"
    GYP_DEFINES="${GYP_DEFINES}" export GYP_DEFINES
    # $TARGET_ARCH settings don't match --dest-cpu settings
    ./configure --prefix=${prefix} --with-intl=system-icu --without-snapshot --shared-openssl --shared-zlib \
        --dest-cpu="${@map_nodejs_arch(d.getVar('TARGET_ARCH'), d)}" \
        --dest-os=linux \
        ${ARCHFLAGS}
}
do_compile () {
    export LD="${CXX}"
    oe_runmake BUILDTYPE=Release
}
do_install () {
    oe_runmake install DESTDIR=${D}
}
do_install_append_class-native() {
    # use node from PATH instead of absolute path to sysroot
    # node-v0.10.25/tools/install.py is using:
    # shebang = os.path.join(node_prefix, 'bin/node')
    # update_shebang(link_path, shebang)
    # and node_prefix can be very long path to bindir in native sysroot and
    # when it exceeds 128 character shebang limit it's stripped to incorrect path
    # and npm fails to execute like in this case with 133 characters show in log.do_install:
    # updating shebang of /home/jenkins/workspace/build-webos-nightly/device/qemux86/label/open-webos-builder/BUILD-qemux86/work/x86_64-linux/nodejs-native/0.10.15-r0/image/home/jenkins/workspace/build-webos-nightly/device/qemux86/label/open-webos-builder/BUILD-qemux86/sysroots/x86_64-linux/usr/bin/npm to /home/jenkins/workspace/build-webos-nightly/device/qemux86/label/open-webos-builder/BUILD-qemux86/sysroots/x86_64-linux/usr/bin/node
    # /usr/bin/npm is symlink to /usr/lib/node_modules/npm/bin/npm-cli.js
    # use sed on npm-cli.js because otherwise symlink is replaced with normal file and
    # npm-cli.js continues to use old shebang
    sed "1s^.*^#\!/usr/bin/env node^g" -i ${D}${exec_prefix}/lib/node_modules/npm/bin/npm-cli.js
    # Install the native torque to provide it within sysroot for the target compilation
    install -d ${D}${bindir}
    install -m 0755 ${S}/out/Release/torque ${D}${bindir}/torque
}
do_install_append_class-target() {
    sed "1s^.*^#\!${bindir}/env node^g" -i ${D}${exec_prefix}/lib/node_modules/npm/bin/npm-cli.js
}
PACKAGES =+ "${PN}-npm"
FILES_${PN}-npm = "${exec_prefix}/lib/node_modules ${bindir}/npm ${bindir}/npx"
RDEPENDS_${PN}-npm = "bash python-shell python-datetime python-subprocess python-textutils \
python-compiler python-misc python-multiprocessing"
PACKAGES =+ "${PN}-systemtap"
FILES_${PN}-systemtap = "${datadir}/systemtap"
BBCLASSEXTEND = "native"
I am able to attach gdb to the node binary; below is the snapshot. It core dumps at this point.
Thread 1 "node" hit Breakpoint 10, v8::internal::Runtime_PromiseHookInit (args_length=2, args_object=0x3fffffffd188, isolate=0x11284ab0)
at /usr/src/debug/nodejs/8.17.0-r0/node-v8.17.0/deps/v8/src/runtime/runtime-promise.cc:132
132 /usr/src/debug/nodejs/8.17.0-r0/node-v8.17.0/deps/v8/src/runtime/runtime-promise.cc: No such file or directory.
(gdb) bt
#0 v8::internal::Runtime_PromiseHookInit (args_length=2, args_object=0x3fffffffd188, isolate=0x11284ab0) at /usr/src/debug/nodejs/8.17.0-r0/node-v8.17.0/deps/v8/src/runtime/runtime-promise.cc:132
#1 0x000003c7b3f04134 in ?? ()
(gdb) c
Continuing.
Nodejs is not supported on the PPC64 LE architecture. There is only support for the big-endian platform on the PPC architecture, and only up to version 7.9.
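If you want to double-check what the cross toolchain actually targets, one way (a sketch, assuming a GCC-based cross compiler available as $CC in the build environment) is to dump its predefined macros:
# __powerpc64__ together with _LITTLE_ENDIAN / __LITTLE_ENDIAN__ indicates a ppc64le
# target; big-endian ppc64 defines _BIG_ENDIAN / __BIG_ENDIAN__ instead.
echo | ${CC} -dM -E - | grep -iE 'powerpc|endian'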

how to remove vendor name from crosstool-ng toolchain name

How do I configure crosstool-ng to drop the vendor name from the generated toolchain name?
For example, creating an arm cross toolchain without specifying a vendor part results in the following naming output:
arm-unknown-linux-gnueabihf-g++
If I had supplied a vendor, for instance "linaro", then I would get output such as:
arm-linaro-linux-gnueabihf-g++
What I want is to make crosstool-ng output the name as follows:
arm-linux-gnueabihf-g++
I am aware that you can use the "Tuple's sed transform" and "Tuple's alias" facilities from menuconfig, but these only create symbolic links to arm-unknown-linux-gnueabihf-g++ etc.
I have a toolchain that came with a board I am playing with, and that toolchain has the vendor part omitted. So my question is: how do they do that?
Even though the documentation states:
CT_TARGET_VENDOR: [...] It can be set to empty, to remove the vendor string from the target tuple.
(see http://crosstool-ng.github.io/docs/configuration/ )
The current behavior is to fall back to 'unknown' if no value for CT_TARGET_VENDOR is given.
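For reference, the documented knob is the vendor part of the target tuple; a sketch of the corresponding .config line (the option behind CT_TARGET_VENDOR in menuconfig):
# Setting the vendor string empty is what the documentation describes; as noted
# above, crosstool-ng currently still falls back to "unknown" here, which is why
# the patch below is needed.
CT_TARGET_VENDOR=""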
This situation was discussed on the crosstool-ng mailing list back in 2011, and a patch was provided with a solution which may help you.
The idea of the patch was to:
[...] supplies a fake vendor and
then strips it out afterwards.
within scripts/functions of the crosstool-ng source.
See: https://sourceware.org/ml/crossgcc/2011-10/msg00047.html
diff -r a31d097e28cd -r 5b1330e7264a scripts/functions
--- a/scripts/functions Wed Oct 19 15:27:32 2011 +1300
+++ b/scripts/functions Wed Oct 19 16:23:36 2011 +1300
@@ -944,6 +944,20 @@
fi
}
+# Computes the target tuple from the configuration and the supplied
+# vendor string
+CT_BuildOneTargetTuple() {
+ local vendor="${1}"
+ local target
+
+ target="${CT_TARGET_ARCH}"
+ target="${target}${vendor:+-${vendor}}"
+ target="${target}${CT_TARGET_KERNEL:+-${CT_TARGET_KERNEL}}"
+ target="${target}${CT_TARGET_SYS:+-${CT_TARGET_SYS}}"
+
+ echo "${target}"
+}
+
# Compute the target tuple from what is provided by the user
# Usage: CT_DoBuildTargetTuple
# In fact this function takes the environment variables to build the target
@@ -994,10 +1008,7 @@
CT_DoKernelTupleValues
# Finish the target tuple construction
- CT_TARGET="${CT_TARGET_ARCH}"
- CT_TARGET="${CT_TARGET}${CT_TARGET_VENDOR:+-${CT_TARGET_VENDOR}}"
- CT_TARGET="${CT_TARGET}${CT_TARGET_KERNEL:+-${CT_TARGET_KERNEL}}"
- CT_TARGET="${CT_TARGET}${CT_TARGET_SYS:+-${CT_TARGET_SYS}}"
+ CT_TARGET=$(CT_BuildOneTargetTuple "${CT_TARGET_VENDOR}")
# Sanity checks
__sed_alias=""
@@ -1012,7 +1023,14 @@
esac
# Canonicalise it
- CT_TARGET=$(CT_DoConfigSub "${CT_TARGET}")
+ if [ -n "${CT_TARGET_VENDOR}" ]; then
+ CT_TARGET=$(CT_DoConfigSub "${CT_TARGET}")
+ else
+ # Canonicalise with a fake vendor string then strip it out
+ local target=$(CT_BuildOneTargetTuple "CT_INVALID")
+ CT_TARGET=$(CT_DoConfigSub "${target}" |sed -r -s s:CT_INVALID-::)
+ fi
+
# Prepare the target CFLAGS
CT_ARCH_TARGET_CFLAGS="${CT_ARCH_TARGET_CFLAGS} ${CT_ARCH_ENDIAN_CFLAG}"
CT_ARCH_TARGET_CFLAGS="${CT_ARCH_TARGET_CFLAGS} ${CT_ARCH_ARCH_CFLAG}"

ERROR: Patch can't be applied/reverted successfully

I've gotten
PATCH_SUPEE-5344_CE_1.8.0.0_v1-2015-02-10-08-10-38.sh
to install properly, but when I try to install
PATCH_SUPEE-1533_EE_1.13.x_v1-2015-02-10-08-18-32.sh
I get the error,
Checking if patch can be applied/reverted successfully...
ERROR: Patch can't be applied/reverted successfully.
patching file app/code/core/Mage/Adminhtml/Block/Dashboard/Graph.php
Reversed (or previously applied) patch detected! Assume -R? [n]
Apply anyway? [n]
Skipping patch.
1 out of 1 hunk ignored -- saving rejects to file app/code/core/Mage/Adminhtml/Block/Dashboard/Graph.php.rej
patching file app/code/core/Mage/Adminhtml/controllers/DashboardController.php
Reversed (or previously applied) patch detected! Assume -R? [n]
Apply anyway? [n]
Skipping patch.
1 out of 1 hunk ignored -- saving rejects to file app/code/core/Mage/Adminhtml/controllers/DashboardController.php.rej
I'm using this command,
sh PATCH_SUPEE-1533_EE_1.13.x_v1-2015-02-10-08-18-32.sh
What do I need to do differently? This is super frustrating; any help is much appreciated!
This is what I did: I noticed it asks for confirmation ("Assume -R? [n]"), so I added -R to confirm the action, like this:
sh PATCH_SUPEE-5344_CE_1.8.0.0_v1-2015-02-10-08-10-38.sh -R
sh PATCH_SUPEE-1533_EE_1.13.x_v1-2015-02-10-08-18-32.sh -R
And they both came out successfully.
Checking if patch can be applied/reverted successfully...
Patch was applied/reverted successfully.
I hope this helps.
Which version are you using? Because the PATCH_SUPEE-1533_EE_1.13.x_v1-2015-02-10-08-18-32.sh patch is already included in Magento 1.9.0.1.
You can also check these files manually: lines marked with a (-) sign have been removed from the file, and lines marked with a (+) sign have been added.
__PATCHFILE_FOLLOWS__
diff --git app/code/core/Mage/Adminhtml/Block/Dashboard/Graph.php app/code/core/Mage/Adminhtml/Block/Dashboard/Graph.php
index c698108..6e256bb 100644
--- app/code/core/Mage/Adminhtml/Block/Dashboard/Graph.php
+++ app/code/core/Mage/Adminhtml/Block/Dashboard/Graph.php
@@ -444,7 +444,7 @@ class Mage_Adminhtml_Block_Dashboard_Graph extends Mage_Adminhtml_Block_Dashboar
}
return self::API_URL . '?' . implode('&', $p);
} else {
- $gaData = urlencode(base64_encode(serialize($params)));
+ $gaData = urlencode(base64_encode(json_encode($params)));
$gaHash = Mage::helper('adminhtml/dashboard_data')->getChartDataHash($gaData);
$params = array('ga' => $gaData, 'h' => $gaHash);
return $this->getUrl('*/*/tunnel', array('_query' => $params));
diff --git app/code/core/Mage/Adminhtml/controllers/DashboardController.php app/code/core/Mage/Adminhtml/controllers/DashboardController.php
index eebb471..f9cb8d2 100644
--- app/code/core/Mage/Adminhtml/controllers/DashboardController.php
+++ app/code/core/Mage/Adminhtml/controllers/DashboardController.php
@@ -92,7 +92,8 @@ class Mage_Adminhtml_DashboardController extends Mage_Adminhtml_Controller_Actio
if ($gaData && $gaHash) {
$newHash = Mage::helper('adminhtml/dashboard_data')->getChartDataHash($gaData);
if ($newHash == $gaHash) {
- if ($params = unserialize(base64_decode(urldecode($gaData)))) {
+ $params = json_decode(base64_decode(urldecode($gaData)), true);
+ if ($params) {
$response = $httpClient->setUri(Mage_Adminhtml_Block_Dashboard_Graph::API_URL)
->setParameterGet($params)
So if the code is already there in the files, as shown above, your patch will not apply because it is not actually needed.
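For example, a quick grep (a sketch; the paths and strings come from the patch above) can confirm whether the patched code is already present:
# If these both match, the SUPEE-1533 changes are already in place and the patch is unnecessary.
grep -n "json_encode(\$params)" app/code/core/Mage/Adminhtml/Block/Dashboard/Graph.php
grep -n "json_decode(base64_decode" app/code/core/Mage/Adminhtml/controllers/DashboardController.php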
Are you using Debian? If so, try this:
./PATCH_SUPEE-1533_EE_1.13.x_v1-2015-02-10-08-18-32.sh
So the sh command is changed to ./
Don't forget to log in as the file owner.
