Automating the download of Red Hat ISOs for kickstart - linux

I'm trying to work on a project to automate kickstart images, but I'm stuck on my first subtask.
The download links for Red Hat ISOs look something like the one below:
https://access.cdn.redhat.com//content/origin/files/sha256/12/mkwosis89j9f8ef53ad7365f2997d42d4f83ccuwodjsl/rhel-server-7.3-x86_64-dvd.iso?auth=148102836_3974432975fa9f10e716c4a38928db
This is a problem because I can't know the SHA or the auth code beforehand, so I can't just construct this URL in bash; I need a way of going to the Latest downloads page and following the link.
Does anybody know what I can use to achieve this?
Thanks.
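One way to attack this is to scrape the Latest downloads page and pull the current link out of the HTML. A rough sketch, assuming you already have an authenticated session cookie saved in cookies.txt and that the page URL and link pattern below are right (both are assumptions; the Red Hat portal requires a login and may change its markup):
#!/bin/bash
# Hypothetical downloads page; substitute the real "Latest downloads" URL.
page="https://access.redhat.com/downloads/content/rhel"

# Fetch the page with the saved session cookie and grab the first ISO link,
# auth token included (the grep pattern mirrors the URL format above).
iso_url=$(curl -s -b cookies.txt "$page" \
  | grep -o 'https://access\.cdn\.redhat\.com/[^"]*\.iso[^"]*' \
  | head -n 1)

# Download under a stable local name for the kickstart tree.
curl -L -b cookies.txt -o rhel-server-latest.iso "$iso_url"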

It seems like you will be wasting a lot of bandwidth on each installation. Have you considered creating a local repository of ISOs? You would only need to add the latest ISO whenever one is released. Check out this link for creating ISO repositories.
https://www.cyberciti.biz/tips/redhat-centos-fedora-linux-setup-repo.html
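The basic recipe from that link boils down to loop-mounting the ISO and pointing a .repo file at it; a minimal sketch (paths are illustrative):
# Loop-mount the ISO somewhere permanent.
mkdir -p /mnt/rhel-iso
mount -o loop /srv/isos/rhel-server-7.3-x86_64-dvd.iso /mnt/rhel-iso

# Point yum at the mounted media.
cat > /etc/yum.repos.d/rhel-local.repo <<'EOF'
[rhel-local]
name=RHEL 7.3 local ISO
baseurl=file:///mnt/rhel-iso
enabled=1
gpgcheck=0
EOF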

Why downloading the apk file is buffered and gives users an old version

We host our landing page on Azure; it exists so users can download an Android apk file. The landing page is a plain HTML file. Here is the markup for the download link:
<a href="http://www.[mysite].com/android/[MyAndroidApp].apk">download here</a>
It all worked fine until now. Users started to complain that the app they downloaded does not work properly, but when we tested it, it worked fine.
Finally we found out that, although the link is
http://www.[mysite].com/android/[MyAndroidApp].apk
sometimes when a user clicks it, it goes to
http://101.44.1.131/cloud/223.210.55.28/files/9216...636//www.[mysite].com/android/[MyAndroidApp].apk
This is a cache, and it holds an old version of our app!
Can anyone tell me why this happens and how I can prevent it from serving our old version?
How often do you update this apk file?
It may be a caching issue, but I'm not sure exactly.
Have you tried using Azure Storage? Upload the file there, and then link directly to it.
It should cost you less in the long run and not cause any buffering/caching issues.
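For what it's worth, a sketch of that upload with the Azure CLI (which postdates this thread; the account, container, and file names are all illustrative):
# Upload the apk to blob storage, then link to the blob directly.
az storage blob upload \
  --account-name myaccount \
  --container-name downloads \
  --name MyAndroidApp.apk \
  --file MyAndroidApp.apk
# The direct link would then be:
#   https://myaccount.blob.core.windows.net/downloads/MyAndroidApp.apk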
I would suggest you put a version number in your filename. This is also good practice for .js files. The problem is very often that the file is cached and the cache is not updated correctly; it's a general problem on the web.
So: try putting a version number in the file name, and let us know if this works.
Thank you all for your suggestions.
We have found the reason. Looking at the redirect URL, it is actually some ISPs caching our apk files. They do this to save themselves money and bandwidth. This is a common practice in some countries and is well documented.
How evil it is.
Our solution is thus to change the file name every time we deploy a new version.
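For anyone hitting the same thing, our rename step amounts to something like this (the version number, paths, and file names are illustrative):
#!/bin/bash
VERSION="1.4.2"   # bumped on every release

# Publish the apk under a version-stamped name so caches treat it as new.
cp MyAndroidApp.apk "site/android/MyAndroidApp-$VERSION.apk"

# Point the landing page's download link at the new file name.
sed -i "s|android/MyAndroidApp[^\"]*\.apk|android/MyAndroidApp-$VERSION.apk|" site/index.html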

About updating a node-webkit app

I want to set auto-updates up for my apps before I release. I'm a budding programmer, so when I looked into node-webkit-updater I was pretty confused. It seems under-documented to me. Can someone explain the overall update mechanism that it helps implement?
As an alternative to node-webkit-updater, I was thinking of creating my own update system. I kinda like how Apple handles extension updates and I was thinking about replicating it. This would involve putting a JSON/XML manifest file on Amazon S3 along with the latest versions of the app for all platforms. The app checks the file at startup and replaces itself with the new version.
Does the latter sound plausible? Am I better off going with node-webkit-updater? If so, can someone explain it to me please? My app is a Mac + Windows project.
This is what we did (a rough sketch of the flow follows the steps):
The first script of the page checks a custom "manifest" (.txt file) on the server, which contains some arbitrary text, e.g. version number.
If this value differs from a local version of the manifest, then download a .zip file from server. (The zip contains the latest nwjs website. You could have a separate one for each platform).
Unzip into a local directory (we use the 7za command-line util).
Set window.location.href to the index.html in that local directory.
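Here are those steps as a shell sketch (the URLs and paths are made up; in the real app this logic runs in JavaScript, with the last step setting window.location.href):
#!/bin/bash
MANIFEST_URL="https://example.com/app/version.txt"   # hypothetical server
APP_DIR="$HOME/.myapp/current"

remote=$(curl -s "$MANIFEST_URL")
local_ver=$(cat "$APP_DIR/version.txt" 2>/dev/null)

if [ "$remote" != "$local_ver" ]; then
  # Fetch the zip with the latest app and unpack it next to the old copy.
  curl -s -o /tmp/app.zip "https://example.com/app/latest.zip"
  7za x -y -o"$APP_DIR.new" /tmp/app.zip
  rm -rf "$APP_DIR" && mv "$APP_DIR.new" "$APP_DIR"
  # In nw.js you would now set window.location.href to $APP_DIR/index.html.
fi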
I know this is an old question, but here is the answer :)
https://www.npmjs.org/package/node-webkit-updater

How to make my own custom Debian-based live distro? Experiencing some problems

The need
Recently I've started flirting with the idea of making my own customized Debian live distro. My aim is to have a USB stick with Debian, specific packages, custom scripts and files installed on it. That way, I can take my OS with everything I need to work with, without taking my laptop with me. Furthermore, it will be especially useful when I just want to replicate the OS without the hassle of installing every single package and doing further customization all over again.
The research
So I decided to go for it and educate myself on the subject. I found the Linux From Scratch project (LFS), but to be honest, it would take lots of time that I currently cannot afford to invest (though I'm seriously considering it for the future).
I decided to use the live-build project scripts based on the instructions and examples of their manual. http://live.debian.net/manual/3.x/html/live-manual.en.html
The problem
So far, I've built a hybrid .iso image with a custom selection of packages by specifying them in config/package-lists/mylist.list.chroot.
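For anyone following along, that list file is just one Debian package name per line; mine looks roughly like this (contents illustrative):
# config/package-lists/mylist.list.chroot
vim
git
openssh-server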
Then I tried to copy my custom scripts, files and software inside specific folders under the chroot directory just created,
i.e.
mkdir chroot/etc/skel/<custom dir here>
or
cp <some file or script> chroot/usr/local/bin/
and then run
lb build binary
The problem is that the iso doesn't get built again after the first time I run lb build, and the customizations made in the chroot directory are deleted every time I try to build it again.
I've tried...
lb clean --binary
lb clean --stage
lb build binary
or
lb build binary iso
So what am I missing? How can I add custom files, folders, and scripts to my custom live Debian without downloading every single package all over again?
Why isn't the iso image built again after the first time I run lb build?
Thanks in advance...
P.S.: I decided to be very detailed in the writing so anyone can understand, especially those who want to try the same...
I am aware of LFS too. But this
My aim is to have a USB stick with Debian, specific packages, custom scripts and files installed on it.
and this
it would take lots of time that I currently cannot afford to invest
led me to my answer.
I have two suggestions. The easy one: use tools like remastersys or live-magic.
Follow this link.
The difficult one: follow the official documentation on how to create a custom Debian CD.
Debian official doc
This answer comes a year late for the original poster, but for future searchers: don't add files directly to the chroot. Instead, make a folder structure in config/includes.chroot. Then your customizations will be retained when you rebuild the image.
See the section "Live/chroot local includes" in the debian-live manual: http://live.debian.net/manual/4.x/html/live-manual.en.html#506
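A minimal sketch of that layout (the script name and custom dir are illustrative): mirror the target filesystem under config/includes.chroot, then rebuild:
# From the live-build working directory, mirror the target paths.
mkdir -p config/includes.chroot/usr/local/bin
mkdir -p config/includes.chroot/etc/skel/mydir
cp myscript.sh config/includes.chroot/usr/local/bin/

# Rebuild; the includes get copied into the chroot on every build.
lb clean
lb build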

Where can I download CCValidator?

I feel like a bit of a dummy here. I downloaded CCValidator for CruiseControl.NET the other day but can't remember from where.
I'm on another machine now without access to the machine where I downloaded CCValidator, and Googling leads me nowhere except to the CCValidator wiki and texts about CCValidator.
Can someone provide the link to download CCValidator?
You don't specify a version...
http://sourceforge.net/projects/ccnet/files/
Pick a version there and download the appropriate CruiseControl.NET-Validator package.

Is there a way to run Trac offline?

I'd like to download the Trac database so I can view its tickets offline. Is there any way to achieve this? I.e. if I need to leave the office and bring my laptop with me, how can I bring the tickets with me without having to connect to the company network?
I know that Mylyn can download and sync tickets via its Trac connector, but I'd like a stand-alone viewer.
See Simple Defects (SD).
I particularly like the "One-tweet install" idea.
I’m installing #SD (http://syncwith.us)
after reading about it on #StackOverflow
curl fsck.com/sd|perl;
export PATH=~/sd/bin:$PATH; sd
Note that you can clone Trac (and other bugtrackers) in SD:
sd clone --from trac:https://trac.parrot.org/parrot
Seeing as you don't want to install a server, how about using RSS? IIRC, Trac lets you get RSS feeds for each person, so you can have a feed of the things assigned to you.
All you need do then is get a nice client that will download these tickets. You should be able to access a plaintext version without an internet connection.
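Trac can emit most ticket queries as RSS by appending a format parameter; an illustrative fetch (the host and username are made up):
# Grab the "assigned to me" query as RSS for an offline reader.
curl -o mytickets.xml 'http://trac.example.com/query?owner=yourname&format=rss'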
If that's not flexible enough, you could write a script on the server to publish a feed using the database directly.
And if RSS isn't for you (and your email is available offline), you could mail reports home. Trac also has this built in.
The default Trac installation uses SQLite to maintain all of its data. Attachments are stored on the file system.
In the folder containing the Trac site, find db/trac.db.
This file can be viewed using the SQLite Manager Firefox add-on.
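If you'd rather stay on the command line, the same file can be queried with the sqlite3 client; the table and column names below come from the standard Trac schema, but verify them against your version:
# Copy the database out of the Trac environment, then list open tickets.
cp /path/to/tracenv/db/trac.db ~/trac-offline.db
sqlite3 ~/trac-offline.db \
  "SELECT id, status, summary FROM ticket WHERE status <> 'closed' ORDER BY id;"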
Happy hunting.
And if RSS or email isn't your notification of choice, there's a Trac plugin that will let you receive task notifications on your Remember The Milk todo list.
See: http://1.www.rememberthemilk.com/forums/ideas/3580/?forum=ideas&hl=bs&topic=3580
If your objective is simply to view the tickets offline, how about the following (a scripted version follows the steps):
Run a report with all the tickets (or all those you're interested in).
Select either the comma-delimited or tab-delimited download link at the bottom of the page.
Import the downloaded file into Excel.
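Those download links are just the report URL with a format parameter, so the export can be scripted too; an illustrative fetch (the host and report number are made up):
# Report 1 as comma-delimited output, ready for Excel.
curl -o tickets.csv 'http://trac.example.com/report/1?format=csv'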
You could install it on a local machine.
You can host Trac locally and set up the connection string to point to your downloaded database.
Sure. Install a web server locally, install Trac, get it set up the same (or a similar) way to how it is on the live version, and then script the server to publish db backups and write a local script to download those and restore them over your database.
It's not simple (installing Trac is a battle on its own, in my experience) but every element is highly googleable =)
The Trac client FatBug (http://fat-bug.com/) listed in
https://trac.edgewall.org/wiki/Clients
seems to do exactly what was described by the OP. I bumped into it after I had just checked SD. SD seems trivial on Linux, but heavy on Windows; it depends on Perl & CPAN.
