I have been assigned a task to create an environment setup that a Java program uses. This Java program uses sqlldr to bulk-load data into the database.
The client's machine already has Oracle Instant Client 18.5 configured. To the existing setup I have to add sqlldr using the Tools package (RPM) provided by Oracle, oracle-instantclient18.5-tools-18.5.0.0.0-3.x86_64.rpm. Correct? There is a .zip file available on the website as well.
I do not know how to do that; I have normally installed software using yum or apt-get. Can anyone please help me?
Here is the existing config which could be of concern. I do not want to disturb the other setups apart from what I want to add:
ORACLE_HOME= /opt/oracle/instantclient_18_5/
ORACLE_BASE= /opt/oracle/instantclient_18_5/
LD_LIBRARY_PATH = /usr/local/lib/:/opt/oracle/instantclient_18_5/:
Questions:
Do I need to download this .rpm file onto the server?
If I download it into the /tmp folder, what next? Do I need to install anything else? If yes, how can I verify whether my current setup already has it?
Once installed, is any post-install configuration needed, specifically for sqlldr to run?
Please help! Thanks!
It looks like you have Instant Client from ZIP files. (If it was instead from RPMs, you would have the files in /usr/lib/oracle/18.5/client64/...).
So simply unzip the Tools ZIP file so its contents are in your /opt/oracle/instantclient_18_5 directory, and make sure this directory is in your PATH.
Because you are using Instant Client you should unset ORACLE_HOME and ORACLE_BASE. The former can cause problems if it is set with Instant Client.
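A minimal sketch of the above, assuming the Tools ZIP name from the 18.5 download page (verify the exact file name there) and your existing /opt/oracle layout:
# unzip the Tools package; it unpacks into an instantclient_18_5 subdirectory,
# so it merges with the existing Basic installation
cd /tmp
unzip instantclient-tools-linux.x64-18.5.0.0.0dbru.zip -d /opt/oracle
# make sqlldr findable and drop the variables that conflict with Instant Client
export PATH=/opt/oracle/instantclient_18_5:$PATH
unset ORACLE_HOME ORACLE_BASE
# quick check: running sqlldr with no arguments should print its usage banner
sqlldr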
In general, follow the Instant Client installation instructions at the foot of the download page.
Also look at using Instant Client 19, which is a Long Term Support version and will connect to the same databases as Instant Client 18. Notice that there are no new Instant Client 18 "Release Updates" being released (though you could build them yourself, if needed).
I am just trying to set up Valve's Source SDK 2013 for Linux, but I have to say that I find the wiki and documentation rather confusing and partly heavily outdated (Windows-only instructions, only for GoldSrc / pre-2013 SDK, etc.).
I hope that someone who already has gone through the hassle can supply me with some hints on how to correctly set up the system.
I tried to use some Windows-specific instructions to understand the system, but some of them are highly platform-specific.
So here is the current status (I based what I did on this wiki page: Wiki: Source SDK 2013):
The source of the Source SDK 2013 from GitHub is cloned to
~/Git/source-sdk-2013/
the SDK Base 2013 is installed via Steam, and the steam-runtime to
~/working/steam-runtime-sdk_2013-09-05/
I was not sure whether there is a specific path I should put the steam runtime into, so I just put it into my self-created working dir.
# Create a Multiplayer sample project
export SDKROOT="~/Git/source-sdk-2013"
bash $SDKROOT/mp/src/creategameprojects
bash $SDKROOT/mp/src/createallprojects
# Setup Steam Runtime
export STEAMRT="~/working/steam-runtime-sdk_2013-09-05/"
cd $STEAMRT
# Choose all build targets (i386 + amd64) and download these
./setup.sh
# Set the current target to the same as the host machine (i.e. amd64)
./shell.sh
# Compile the actual game
make -f $SDKROOT/mp/src/games.mak
I have not yet touched any source files, as there are plenty of sources already supplied. I just wanted to confirm that I have a working toolchain set up.
This all compiles fine, but at the end the script wants to chmod the client.so and server.so and claims "not found", sadly without providing any information about where it searched for them. Actually these files exist in
$SDKROOT/mp/game/mod_hl2mp/bin
and even marked as executable (-rwxr-xr-x).
So I just ignored this and hoped for the best. The next line sounds a bit strange to me:
At this point you should have client.so and server.so files to load with the Source SDK Base 2013 of your choice.
So I should be able to load the files with "the Source SDK Base 2013" (of your choice?!? Valve is the only one providing it O.o). How am I supposed to do that? I have not found any hint whatsoever for that, sadly.
But they point me to the README.txt of the steam-runtime, which tells me to do this:
run.sh ./MyGame
But where's the executable? I only have .so's
And this is the point where I currently am. I'm quite confused as I have many questions now:
Why do only the Linux users need to download the steam runtime? What if I do not want to ship via Steam?
Is that chmod failure a script failure or a mistake in my directory setup?
How do I load these libraries via the SDK Base?
Where is the binary? I'm quite confused here...
Have I overlooked something?
I appreciate any hints or links to resources, and maybe explanations where I was just too dumb to understand what they mean :P
EDIT: Actually there is a GitHub repo for the steam-runtime too (GitHub/steam-runtime). Why is the download so outdated when the git repo has some updated stuff going on? Which one should I choose?
With the help of a friend I could solve this faster than expected (he didn't know about Linux, but we could figure it out together); I didn't expect to be able to answer this myself.
To "load" the game via the Steam SDK Base just append the -game parameter and point it to the directory with the gameinfo.txt (ie. $SDKROOT/mp/game/mod_hl2mp/) in it.
Alternatively just copy the contents of this directory to
~/.local/share/Steam/steamapps/sourcemods/$MYSOURCEMOD
where $MYSOURCEMOD is whatever you want to call it (do not use spaces). Then add a steam.inf file in that directory with the following content:
appID=243750
ProductName=$MYSOURCEMOD
PatchVersion=1.0.0.0
After a restart Steam will be able to find the sourcemod.
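A rough sketch of that copy step (the mod name "mymod" is just an example):
# copy the built mod into Steam's sourcemods folder
mkdir -p ~/.local/share/Steam/steamapps/sourcemods/mymod
cp -r $SDKROOT/mp/game/mod_hl2mp/* ~/.local/share/Steam/steamapps/sourcemods/mymod/
# create the steam.inf described above
printf 'appID=243750\nProductName=mymod\nPatchVersion=1.0.0.0\n' > ~/.local/share/Steam/steamapps/sourcemods/mymod/steam.inf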
I'm not quite sure what the "steam-runtime" thing is, but I suppose it is there to set up the build environment (to use a custom gcc etc.), as that is what the scripts look like. I'm not sure why you should run the game via the run.sh in the bin/ subfolder of the runtime instead of via Steam or via the -game parameter on the Source SDK Base, but maybe someone can enlighten me here.
The archive you are supposed to download is only a downloader/configurator for the steam-runtime hosted on GitHub.
I want to set auto-updates up for my apps before I release. I'm a budding programmer, so when I looked into node-webkit-updater I was pretty confused. It seems under-documented to me. Can someone explain the overall update mechanism that it helps implement?
As an alternative to node-webkit-updater, I was thinking of creating my own update system. I kinda like how Apple handles extension updates and I was thinking about replicating it. This would involve putting a JSON/XML manifest file on Amazon S3 along with the latest versions of the app for all platforms. The app checks the file at startup and replaces itself with the new version.
Does the latter sound plausible? Am I better off going with node-webkit-updater? If so, can someone explain it to me, please? My app is a Mac + Windows project.
This is what we did:
The first script of the page checks a custom "manifest" (a .txt file) on the server, which contains some arbitrary text, e.g. a version number.
If this value differs from the local copy of the manifest, download a .zip file from the server. (The zip contains the latest nwjs website. You could have a separate one for each platform.)
Unzip it into a local directory (we use the 7za command-line util).
Set window.location.href to the index.html in that local directory.
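Roughly, the moving parts look like this (all file names, paths and the URL are made up for the example; the in-app check itself is just a few lines that compare the two version strings and then set window.location.href):
# on the server: publish a version manifest and the zipped nwjs site
echo "1.4.2" > /var/www/myapp/version.txt
7za a /var/www/myapp/myapp-latest.zip index.html js/ css/ package.json
# on the client: fetch the remote manifest and compare it with the local copy
curl -o /tmp/version.txt https://example.com/myapp/version.txt
# if it differs, fetch and unpack the new build, then remember the new version
curl -o /tmp/myapp-latest.zip https://example.com/myapp/myapp-latest.zip
7za x -y -o"$HOME/.myapp/current" /tmp/myapp-latest.zip
cp /tmp/version.txt "$HOME/.myapp/version.txt"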
I know this is an old question, but here is the answer :)
https://www.npmjs.org/package/node-webkit-updater
I have a standalone server running Cygwin. I did not set up this server; it was inherited. Anyway, I'd like to know what options the installing admin selected in the setup program.
I've read that I could look in /etc/setup, /etc/postinstall, or /etc/preremove, but there are a lot of packages in those directories... the same goes for the output of cygcheck -c.
I don't want to know every single library on the system... just how to duplicate the install. Is there a way to determine which packages were selected in the GUI setup program?
Thanks!
Cygwin is pretty standalone. You should be able to archive up the entire Cygwin directory (and subdirectories) and move it to the same location on another system.
If you archive it up, I recommend 7-Zip (you can get it free here). The built-in Windows archiver can create permission problems when an archive is extracted on a destination system, so I recommend 7-Zip for both archiving and unarchiving. If you use the built-in Windows archiver, move the archive to the new system and extract it, it will extract without errors, but you may find things don't actually work right while using some Cygwin applications.
If you don't copy everything, you won't move over the original admin's custom changes.
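For example, with the 7-Zip command-line tool the move could look roughly like this (C:\cygwin is an assumption; use whatever the actual install root is, and stop any running Cygwin processes first so no files are locked). On the old machine:
7z a cygwin-backup.7z C:\cygwin
On the new machine, extract it back to the same location:
7z x cygwin-backup.7z -oC:\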
I'm trying to update our installer so a user can simply double-click on a file and have all the dependencies and our software installed easily. This is a suite of applications that will be deployed on a clean Ubuntu 8.04 (Hardy Heron) installation. I have investigated making a .deb file, but listing the dependencies doesn't work because there isn't any Internet access available. And any script that would set up a local APT repository would still need to be run from the command line. Is there a way to put a .deb file inside of a .deb file?
I know many companies ship shell scripts that you have to chmod +x and then execute. This is not acceptable. It is ridiculous that this isn't possible, especially considering the distribution and architecture are fixed.
If you are totally confident that it will be installed on the same system every time, you can work out the list of package dependencies yourself, fetch them from the Ubuntu repositories, and package them up with your software. You just have to be clear that your software is for a specific release, and you will probably have to deal with things like keeping up with maintenance releases.
You can also easily install with a script. As for your complaint about making scripts executable: well, I don't know how you're shipping your product, but since you say it's going somewhere without Internet access, I assume it's going to be copied from some kind of media. If you make the script executable when you put it on that media, you're done.
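A rough sketch of both ideas together (package and path names are examples; run the download step on a machine with the same Ubuntu 8.04 setup and Internet access):
# collect the dependency .debs from the apt cache
apt-get clean
apt-get install --download-only --reinstall yourdep1 yourdep2
mkdir -p bundle && cp /var/cache/apt/archives/*.deb bundle/
# install.sh, shipped on the media next to bundle/ and marked executable before burning:
dpkg -i bundle/*.deb yoursoftware.deb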
If you'd like to do this using packages, you can create a CD-ROM which contains a package repository. You can find all kinds of information on this with a Google search. For starters, try this GUI for doing it: http://aptoncd.sourceforge.net/
A makeself self-extracting executable that starts the install script using sudo will work.
The user can either run it from a terminal (after chmod-ing it) or can double-click it and tell it to "Run" from the prompt.
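A minimal sketch of building such an archive (directory and file names are examples; install.sh is a hypothetical script that does the actual copying/dpkg work and calls sudo where needed):
# payload/ contains the application files and .debs plus install.sh
makeself.sh ./payload myapp-installer.run "MyApp installer" ./install.sh
The resulting myapp-installer.run extracts itself to a temporary directory and runs ./install.sh from there.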
It's possible to put .deb files into .deb files. The only thing you need to do is configure the appropriate scripts.
A .deb file consists of:
1x control.tar.gz: contains a file "control" (describes the package) and optional files like "postinst" (a script executed right after extraction). There are other files you might include; a Google search should turn up information about the available scripts.
1x data.tar.gz: contains a piece of root-filesystem structure with the files/folders that need to be (re)placed. Additionally, you may configure the behaviour in the mentioned scripts.
1x debian-binary: as far as I remember, this is simply a version number in a file. I don't know exactly what it means; just remember that in most cases this is 2.0.
So you can now put your .deb files in the data package. Those are extracted by your script... and installed using:
# dpkg -i yourpackage1.deb yourpackage2.deb
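For illustration, a rough sketch of building such a wrapper package with dpkg-deb (all names are examples; dpkg-deb generates the control.tar.gz / data.tar.gz / debian-binary layout described above from a directory that has a DEBIAN/ subfolder):
# DEBIAN/ holds control and postinst; everything else becomes data.tar.gz
mkdir -p wrapper/DEBIAN wrapper/opt/mysuite/debs
cp control postinst wrapper/DEBIAN/
chmod 755 wrapper/DEBIAN/postinst
cp dependency1.deb dependency2.deb wrapper/opt/mysuite/debs/
dpkg-deb --build wrapper mysuite_1.0_i386.deb
One caveat: running dpkg -i from inside postinst clashes with the dpkg lock held by the outer installation, so it is safer to let a separate shipped script install the bundled packages.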
I'd like to download the Trac database so I can view its tickets offline. Is there any way to achieve this? I.e. if I need to leave the office and bring my laptop with me, how can I bring the tickets with me without having to connect to the company network?
I know that Mylyn can download and sync tickets via its Trac connector, but I'd like some stand-alone viewer.
See Simple Defects (SD).
I particularly like the "One-tweet install" idea.
I’m installing #SD (http://syncwith.us)
after reading about it on #StackOverflow
curl fsck.com/sd|perl;
export PATH=~/sd/bin:$PATH; sd
Note that you can clone Trac (and other bugtrackers) in SD:
sd clone --from trac:https://trac.parrot.org/parrot
Seeing as you don't want to install a server, how about using RSS? IIRC, Trac lets you get RSS feeds for each person, so you can have a feed of things assigned to you.
All you need to do then is get a nice client that will download these tickets. You should be able to access a plaintext version without an internet connection.
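For example (the host and report number are made up), a report page can be pulled down as RSS or CSV for offline reading:
curl -o my-tickets.xml 'https://trac.example.com/report/7?format=rss'
curl -o my-tickets.csv 'https://trac.example.com/report/7?format=csv'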
If that's not flexible enough, you could write a script on the server to publish a feed using the database directly.
And if RSS isn't for you (and your email is available offline), you could mail reports home. Trac also has this built in.
The default Trac installation uses SQLite to maintain all of the data. Attachments are stored on the file system.
In the folder containing the trac site, find \db\trac.db
This file can be viewed using the SQLite Manager Firefox add-on.
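If you prefer the command line over the add-on, the sqlite3 shell works just as well (the path is an example; in the default schema the table is called ticket):
sqlite3 /path/to/tracsite/db/trac.db "SELECT id, status, owner, summary FROM ticket ORDER BY id;"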
Happy hunting.
And if RSS or email isn't your notification method of choice, there's a Trac plugin that will let you receive task notifications on your Remember The Milk to-do list.
See: http://1.www.rememberthemilk.com/forums/ideas/3580/?forum=ideas&hl=bs&topic=3580
If your objective is simply to view the tickets offline, how about this:
Run a report with all the tickets (or all those you're interested in).
Select either the comma-delimited or tab-delimited download link at the bottom of the page.
Import the downloaded file into Excel.
You could install it on a local machine.
You can host Trac locally and set the connection string to point to your downloaded database.
Sure. Install a web server locally, install Trac, get it set up the same (or a similar) way to the live version, then script the server to publish db backups and write a local script to download those and restore them over your database.
It's not simple (installing Trac is a battle on its own, from my experience of it), but every element is highly googleable =)
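As a rough sketch (paths are examples), the standalone tracd server is enough for local browsing once you have a copy of the Trac environment:
# install Trac locally and serve the copied/restored environment
easy_install Trac
tracd --port 8000 /path/to/copy-of-tracenv
# then point the browser at http://localhost:8000/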
The Trac client FatBug (http://fat-bug.com/) listed in
https://trac.edgewall.org/wiki/Clients
seems to do exactly what was described by the OP. I bumped into it right after I checked SD. SD seems trivial on Linux, but heavy on Windows; it depends on Perl & CPAN.