Much to my surprise, this is not a five-minute task at all. I've spent almost a day without achieving anything...
My existing website notgoodname.com has been running well for months, and I want to change it to goodnewname.com (already bought). Ideally, I would like to do it in the way that impacts users least (minimal downtime, zero data loss); SEO and other concerns are a second priority.
Initially, I was thinking about having goodnewname.com and notgoodname.com share the same IP and the same hosting resources (PHP code, MySQL DB), then gradually retiring notgoodname.com. However, Googling hasn't helped, and neither has GoDaddy support; both gave me confusing guides.
Please lend me some guidance or good links. (My background: ~10 years as a software developer, but just a few months as a webmaster: cPanel/WHM/hosting/DNS kind of stuff.)
Firstly, I would like to congratulate you on goodnewname.com, what a find. :P
The easiest, hassle-free way to use the new domain is to add it as a parked domain under the cPanel account of the existing website. You would also need to either point the domain's nameservers at WHM's/your host's nameservers, or add a root A record pointing the domain to your web server's IP.
If the website is quite straightforward (PHP, CSS, HTML, no databases, etc.), you could, instead of adding it as an addon, create another hosting account for the new name, copy all the files across from the web root (public_html) of the old account to the web root (public_html) of the new account, then delete the old account and add the old bad name as an addon domain.
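If you take the copy route, the shell steps might look something like this. This is only a sketch: the paths are stand-ins for the two accounts' home directories (temp directories here so the example is self-contained), and the .htaccess 301 redirect assumes Apache with mod_rewrite. Permanently redirecting the old name to the new one is also the friendliest option for users with old bookmarks, and for SEO.

```shell
# Stand-ins for /home/<oldacct>/public_html and /home/<newacct>/public_html
OLD="$(mktemp -d)/public_html"
NEW="$(mktemp -d)/public_html"
mkdir -p "$OLD" "$NEW"
echo "<?php echo 'hello'; ?>" > "$OLD/index.php"   # pretend site content

# Copy everything across, preserving permissions and timestamps
cp -a "$OLD/." "$NEW/"

# On the old account, 301-redirect every request to the new domain
cat > "$OLD/.htaccess" <<'EOF'
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?notgoodname\.com$ [NC]
RewriteRule ^(.*)$ https://goodnewname.com/$1 [R=301,L]
EOF
```

With the redirect in place, the old name keeps working for visitors while search engines learn the new one.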
Related
I was browsing the web to find out whether I can have my own custom extension. It turns out that's a long process, but can I have one of these extensions for my website: .ai, .java, or .py?
Please explain where I can buy any of the above-mentioned extensions, and for how much.
Refer to: https://www.iana.org/domains/root/db
To get a domain under .ai, you just need to find a registrar that supports it, for example: https://www.namecheap.com/domains/registration/results.aspx?domain=foo.ai
To register a brand-new TLD, you could check these guidelines/FAQs; here are some links:
https://newgtlds.icann.org/en/applicants/global-support/faqs/faqs-en
https://www.icann.org/news/announcement-2-2008-10-23-en
I did some quick research:
.java is a private TLD owned by Oracle, and currently they do not sell domain names under it.
.py domain names are not available directly, but names under second-level domains like .com.py, .net.py, and so on are. You can buy them from nic.py.
.ai domains can be registered for a minimum of two years and the price is $68.88/year. Here is more info: Namecheap
Getting the .AI domain through their main registrar can be a tedious process, so I would rather recommend Namecheap, Temok or EuroDns as they will handle the registration faster and save you time. For more detailed information check AI-domain-FAQ.
Lanexbg already gave enough information about .java and .py and I have nothing to add there.
I have multiple sites (30) on my VPS with the same template in the /wp-content/themes/ directory, and when I have to update the theme I have to repeat the operation across thirty folders.
Is it possible to use a symbolic link so that the theme folder inside each /wp-content/themes/ directory points to a single shared copy?
I want to do something like this if it's possible:
/var/www/<theme_folder>/ -> /home/<user>/public_html/wp-content/themes/<link_theme_folder>
Can WordPress recognize a folder that is really a symlink?
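Concretely, the link I'd create for each site would be something like this (a sketch with temp directories standing in for /var/www/<theme_folder> and each site's public_html, just so it's self-contained):

```shell
# Shared, central copy of the theme (stands in for /var/www/<theme_folder>)
THEME_SRC="$(mktemp -d)/mytheme"
mkdir -p "$THEME_SRC"
echo "/* Theme stylesheet */" > "$THEME_SRC/style.css"

# One site's themes directory (stands in for .../wp-content/themes)
SITE_THEMES="$(mktemp -d)/wp-content/themes"
mkdir -p "$SITE_THEMES"

# The symlink: update the shared copy once, every site sees the change
ln -s "$THEME_SRC" "$SITE_THEMES/mytheme"
ls -l "$SITE_THEMES"
```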
Thanks for your help and sorry for my bad English.
Have a nice day.
I tried locally and WordPress 4.x did not detect the symlinked folder, so I guess it's a no-go.
What you should consider instead is migrating your 30 WP installations into one multisite (or "network") configuration.
This will allow you to centralize plugins and themes for all your websites in one interface to rule them all.
You will need to organize the migration:
- set up a new WP and configure it as multisite;
- import all plugins used by your 30 sites into this WordPress, along with their configurations (I'd do that manually);
- add your theme to the themes folder;
- recreate the users;
- export the posts from each site as an XML file;
- import each XML file into its corresponding new blog;
- add a domain-mapping plugin so that each blog has its own domain name (site1.com, site2.com instead of site.com/site1/ URLs).
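If WP-CLI (the `wp` command) is installed on the server, the export/import steps can be scripted per site. The sketch below only prints the commands it would run; the site list, paths, and filenames are all made up (`wp export` auto-names its output unless told otherwise):

```shell
# Dry run: print one export and one import command per site
SITES="site1.com site2.com site3.com"
EXPORT_DIR=/tmp/wp-exports
OUT_FILE="$(mktemp)"

for site in $SITES; do
  # Export posts from the old single-site install as WXR (XML)
  echo "wp export --path=/var/www/$site --dir=$EXPORT_DIR"
  # Import into the matching blog on the new multisite
  echo "wp import $EXPORT_DIR/$site.xml --authors=create --url=$site"
done > "$OUT_FILE"

cat "$OUT_FILE"
```

Scripting it keeps the thirty migrations consistent and repeatable while you practice on the local copy.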
Tips:
Tell your customers that you need to "freeze" their websites for the necessary time (meaning: no more touching the CMS, adding posts, changing configurations).
Practice on a local copy first so you can play around safely.
Work on another, separate domain name during the setup. When the sites are all properly replicated, update the DNS records of the 30 sites to point to the new multisite WP. This way, no downtime!
This can take some time but will make it much easier for management and for adding new sites in the future.
I have linked a custom domain name through GoDaddy to my Blogger blog. It all works fine. For example, my domain name is (example, not the real site) www.myblog.com. My blogger site is (again, example, not the real site) www.myblog.blogspot.com.
It all looks the same on the home page. The issue comes when I click on a specific post. For instance, the post "Great Cake" under the Blogger address shows up as www.myblog.blogspot.com/greatcake. However, the domain name (the one I want to use) still only shows up as www.myblog.com, no matter what page I am on. Because of that, I cannot share links to individual posts, only to the blog as a whole.
I hope that made sense. I'm looking for a way to have the domain name (that is forwarded with masking through GoDaddy) also reflect individual posts and pages with an individual web address.
Can anyone explain how I could do that? Because I'm not sure how to word the question, I'm having no luck trying to find an answer on the internet, though I find many blogs that do what I want mine to do and are also powered by Blogger. Please help!
It sounds as if you have done the basic GoDaddy setup (I'm using Namecheap), but you may not have completed the Blogger setup. Instructions are here: https://support.google.com/blogger/troubleshooter/1233381?hl=en#ts=1734115
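One thing worth checking: Blogger custom domains are normally set up with DNS records, not with GoDaddy's "forwarding with masking". Masking wraps the blog in a frame, which is exactly why the address bar stays at www.myblog.com on every page. After removing the forwarding, the DNS for a Blogger custom domain usually looks something like this (zone-file style; the second record's name and target are per-blog tokens from Blogger's settings page, so the values below are placeholders):

```
; www points at Google's serving infrastructure
www       IN  CNAME  ghs.google.com.
; per-blog verification record; real values come from Blogger settings
<token>   IN  CNAME  gv-<token>.dv.googlehosted.com.
```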
New Blogger no longer provides the custom-domain option in the same place. If that is the case for you, read how to set up a custom domain in just 30 minutes: link
I am a new developer (I just graduated on the 10th) and was hired by a company to do web development. I was asked to make some minor changes to a site that this company acquired. The problem is that we do not have access to the source code (apparently there was a bad break-up with the previous developers and we cannot get the source; I'm not exactly sure). Is there a way I can add links to the site and have it change live? I have Visual Studio, the address, the links, and the videos they will go to. It's not a hard fix, but I don't know how to edit the site without the source code. Any suggestions? Thanks in advance!
I advise you to talk to a senior or superior and get more information on how to proceed, because getting that code in a less than professional (or legal) way (e.g. using website rippers or something) would be a bad career move ;)
good luck.
Interesting situation, I should say; the company definitely didn't do its homework before the break-up.
I am presuming you answer "yes" to the questions below:
Is your company the legal owner of this website?
Can you change the nameservers, CNAMEs, etc.?
Is the current website free of Flash or Silverlight?
If you're still here, you have said "yes" to all of the above.
First of all, navigate to every page of the website and use File > Save As to save each page as HTML (make sure you choose "Webpage, complete"; this will save all the images as well). I realize this will be static, but there is not much you can do here.
Get all resources (stylesheets, XSDs if any, and any other images).
Enrich this content based on the requirements (i.e. add dynamic content, change logos, etc.).
Modify the CNAME or nameserver records to point to a web server you control.
Deploy your enriched and tested code.
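The manual save-as step can also be automated. As a sketch, here is the wget invocation I would try, printed rather than executed (replace the URL with the real site):

```shell
# --mirror           recursive download with timestamping
# --page-requisites  also fetch the CSS, images, and scripts each page needs
# --convert-links    rewrite links so the local copy browses correctly
# --adjust-extension save pages with an .html extension
CMD="wget --mirror --page-requisites --convert-links --adjust-extension https://example.com/"
echo "$CMD"
```

This gets you the same static snapshot as clicking Save As on every page, but in one pass and without missing any pages reachable by links.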
Educate your company to treat its developers well, and when things do go wrong, ensure the transition is handled properly.
I hope this helps, and good luck.
Krishna
I'm trying to get crawling to work on two separate farms I have, but I can't get it to work on either one. They both have two WFEs, with an additional WFE configured as an index server. There is one more server dedicated to Query, and two clustered SQL 2005 back-end servers for the database. I have unsuccessfully tried at least 50 different websites' worth of solutions found through a search engine. I have configured (extended) my web app to use http://servername:12345 as the default zone and http://abc.companyname.com as the custom and intranet zones. When I enter each of those into the content source and then try to run a crawl, I get a couple of errors in the crawl log:
http://servername:12345 returns:
"Could not connect to the server. Please make sure the site is accessible."
http://abc.companyname.com returns:
"Deleted by the gatherer. (The start address or content source that contained this item was deleted and hence this item was deleted.)"
However, I can click both URLs and the pages are accessible.
Any ideas?
More info:
I wiped the slate clean, so to speak, and ran another crawl to provide an updated sample.
My content sources are as such:
http://servername:33333
http://sharepoint.portal.fake.com
sps3://servername:33333
My current crawl log errors are:
sps3://servername:33333
Error in PortalCrawl Web Service.
http://servername:33333/mysites
Content for this URL is excluded by the server because of a no-index attribute.
http://servername:33333/mysites
Crawled
sts3://servername:33333/contentdbid={62a647a...
Crawled
sts3://servername:33333
Crawled
http://servername:33333
Crawled
http://sharepoint.portal.fake.com
The Crawler could not communicate with the server. Check that the server is available and that the firewall access is configured correctly.
I double checked for typos above and I don't see any so this should be an accurate reflection.
One thing to remember is that crawling SharePoint sites is different from crawling file shares or non-SharePoint websites.
A few other quick pointers:
The sps3: protocol is for crawling user profiles for People Search. You can disregard anything the crawler says about it until you're ready for user profiles.
Your crawl account is supposed to have access to your entire farm. If you see permission errors, find the KB article that tells you how to reset your crawl account (it's a specific stsadm.exe command). If you're trying to crawl another farm's content, then you'll have to work something else out to grant your crawl account access. I think this is your biggest issue at present.
The crawler (running from the index server) will attempt to visit the public URL. I've had inter-server communication issues before; make sure all three servers can ping each other, and make sure the index server can reach the public URL (open IE on the index server and check it out). If you have problems, it's time to dirty up your index server's hosts file. This is something SharePoint does for you anyway, so don't feel too bad doing it. If you've set up anything aside from Integrated Windows Authentication, you'll have to work harder to get your crawler working.
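As an illustration of the hosts-file workaround (the IP is a placeholder for your WFE's internal address; the hostname is the example public URL from this thread):

```
# %SystemRoot%\System32\drivers\etc\hosts on the index server
10.0.0.21    sharepoint.portal.fake.com
```

This forces the indexer to resolve the public URL to a server it can actually reach, bypassing whatever external DNS or load balancer is in the way.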
Anyway, there's been a lot of back and forth in the responses, so I'm just shotgunning a bunch of suggestions out there, maybe one of them is on target.
I'm a little confused about your farm topology. A machine installed as just a WFE cannot be an indexer. A machine installed as "complete" can be an indexer, a query server, and/or a WFE...
Also, instead of changing the default content access account, you may want to add a crawl rule instead (once everything is up and running)
Can you see if anything helpful is in the %commonprogramfiles%/microsoft shared/web server extensions/12/logs on your indexer?
The log file may be a bit verbose; you can search for "started" or "full", which will usually get you to the line in the log where your crawl started.
Also, on your sql machine, you may be able to get more information from the MSScrawlurlhistory table.
Can you create a content source for http://www.cnn.com and start a full crawl? Do you get the same error(s)?
Also, we may want to take this offline, let me know if you want to do that.
I'm not sure whether there is a way to send private messages via Stack Overflow, though.
It sounds like most of your issues are related to Kerberos. If you don't have the Infrastructure Update applied, then SharePoint will not be able to use Kerberos auth for websites on non-default ports (i.e. other than 80/443). That's also why (I would bet) you cannot access CA from server 5 when it's on server 4. If you don't have the SPNs set up correctly, then CA will only be accessible from the machine it is installed on. If you had installed SharePoint using port 80 as the default URL, you'd be able to do the local SharePoint sites crawl without any hitches; by design, the local SharePoint sites crawl uses the default URL to access the sites. Check out http://codefrob.spaces.live.com/blog/cns!7C69E7B2271B08F6!363.entry for a little more detail on how to get Kerberos and SharePoint to work well together.
In the Services on Server section check the properties for the search crawl account to make sure it is set up, and that it has permissions to access those sites.
Thanks for the new input!
So I came back from my weekend and I wanted to go through your pointers and try every one and then report back about how they didn't work and then post the results that I got. Funny thing happened, though.
I went to my indexer (servername5) and tried to connect to Central Admin and the main portal from Internet Explorer. Neither worked. So I went into IIS on the indexer to try to browse to the main portal from within IIS. That didn't work either, and I received an error telling me that something else was using that port. So I saw my old website from the previous build, and I deleted it from IIS along with the corresponding application pool. Then I started the app pool for the website from the new build and browsed to the website. Success. Then I browsed to the website from the browser on my own PC. Success again. Then I ran a crawl by the full URL, not the server name, like so:
http://sharepoint.portal.fake.com
Success again. It crawled the entire portal including the subsites just like I wanted. The "Items in index" populated quickly and I could tell I was rolling.
I still cannot access the Central Admin site hosted on servername4 from servername5. I'm not sure why not but I don't know that it matters much at this point.
Where does this leave me? What was the fix?
I'm still not sure. Maybe it was the rebuild. Maybe as soon as I rebuilt the server farm I had everything I needed to get it working, but it just wouldn't work because of the previous website still in IIS. (It's funny how sloppy a SharePoint uninstall can be. Manual deletion of content databases, websites, and application pools seems necessary, and that probably shouldn't be the case.)
In any event, it's working now on my "test" farm so the key is to get it working on the production farm. I'm hopeful that it won't be so difficult after this experience.
Thanks for the help from everyone!