There might not be a right answer, so opinions are welcomed.
My site has a section for a product catalog at /catalog, and products are listed in that directory as /catalog/product-name. Should associated files and images be visible in a sub-directory (/catalog/product-name/image/img1.png), in the same directory (/catalog/product-name/img1.png), or in a central directory (/images/img1.png)?
This is entirely for the sake of SEO structure, since the images are stored as blobs in RAM and accessed through a hash table.
Also, my initial goal was to allow the same image to be accessed under a multitude of names (e.g. product-name-profile-shot.png as an alias for img1.png), but since there is no form of canonical linking for images, do I run the risk of looking spammy if the same image appears in multiple locations with different URLs?
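For context, here is a minimal sketch of the in-memory setup described above, with several aliases resolving to one blob through a hash table; the Flask wrapper and all names are made up for illustration, not my actual code:

    # Minimal sketch: several URL aliases resolve to one image blob held in RAM.
    # The Flask setup, names, and sample data are illustrative assumptions only.
    from flask import Flask, Response, abort

    app = Flask(__name__)

    # Canonical key -> raw image bytes (loaded elsewhere at startup).
    IMAGE_BLOBS = {"img1.png": b"\x89PNG..."}

    # Alias filename -> canonical key; many aliases can point at one blob.
    ALIASES = {
        "img1.png": "img1.png",
        "product-name-profile-shot.png": "img1.png",
    }

    @app.route("/catalog/<product>/<filename>")
    def serve_image(product, filename):
        # The product segment only shapes the URL; lookup is by filename alias.
        key = ALIASES.get(filename)
        if key is None:
            abort(404)
        return Response(IMAGE_BLOBS[key], mimetype="image/png")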
In my experience, URL structure plays an important role in internet marketing. URLs organized by theme really help when you optimize a website for organic search. As a reference, this simple static website, yogacurious.com, is organized the same way you describe; check its categorization and how it shows up in Google search. User-friendly categorization can result in better search presence.
Regarding image aliases, I would not go that route. I recommend using the same image with different alt text, chosen to suit the content of the product page on which you place it.
All the best
You absolutely do not need to alter the URL structure of resources. The URL structure of pages is important; the URL structure of resources is not, and will not affect SEO.
I wouldn't create aliases for images since there is no justification for doing so from an SEO perspective.
I am working on a scraping project for a company. I used Python libraries such as Selenium, mechanize, and BeautifulSoup4, and I have been successful at putting the data into a MySQL database and generating the reports they wanted.
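For reference, a rough sketch of the kind of pipeline I mean, using requests and BeautifulSoup to feed a MySQL table; the URL, selectors, credentials, and table schema are placeholders, not the company's actual setup:

    # Illustrative only: fetch a page, pull out product rows, store them in MySQL.
    # The URL, CSS selectors, credentials, and table schema are made-up placeholders.
    import requests
    from bs4 import BeautifulSoup
    import mysql.connector

    html = requests.get("https://example.com/products").text
    soup = BeautifulSoup(html, "html.parser")

    rows = []
    for item in soup.select("div.product"):  # selector is an assumption
        name = item.select_one("h2").get_text(strip=True)
        price = item.select_one("span.price").get_text(strip=True)
        rows.append((name, price))

    conn = mysql.connector.connect(user="scraper", password="...", database="reports")
    cur = conn.cursor()
    cur.executemany("INSERT INTO products (name, price) VALUES (%s, %s)", rows)
    conn.commit()
    conn.close()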
But I am curious: why is there no standardization of website structure? Every site uses a different name/id for its username/password fields. I looked at the Facebook and Google login pages, and even they name their username/password fields differently. Other elements are likewise named arbitrarily and placed anywhere.
One obvious reason I can see is that bots would eat up a lot of bandwidth, and websites are basically targeted at human users. A second reason may be that websites want to show advertisements. There may be other reasons too.
Would it not be better if websites didn't have to provide APIs and there were a single framework for bot/scraper login? For example, every website could have a scraper-friendly version that is structured and named according to a standard specification which is universally agreed on, plus a page that acts as a help feature for the scraper. To access this version of the website, the bot/scraper would have to register itself.
This would open up an entirely different kind of internet to programmers. For example, someone could write a scraper that monitors vulnerability and exploit listing websites and automatically closes the security holes on the user's system. (For this, those websites would have to offer a version with data that can be applied directly, such as patches and where they should be applied.)
And all of this could easily be done by an average programmer. On the dark side, one could write malware that updates itself with new attack strategies.
I know it is possible to use Facebook or Google login on other websites through open authentication, but that is only a small part of scraping.
My question boils down to: why is there no such effort out in the community? And if there is one, kindly point me to it.
I searched Stack Overflow but could not find a similar question, and I am not sure this kind of question is appropriate for Stack Overflow. If not, please point me to the correct Stack Exchange site.
I will edit the question if something in it does not meet the community criteria, but it is a genuine question.
EDIT: I got the answer, thanks to b.j.g: there is such an effort by the W3C, called the Semantic Web. (Anyway, I am sure Google will hijack the whole internet one day and make this possible, within my lifetime.)
EDIT: I think what you are looking for is the Semantic Web.
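For concreteness, much of the Semantic Web effort surfaces today as structured data embedded in pages (RDFa, microdata, JSON-LD). A minimal sketch of pulling schema.org JSON-LD out of a page; the URL is a placeholder:

    # Illustrative only: schema.org JSON-LD blocks are one concrete, widely deployed
    # outcome of the structured-data / Semantic Web effort. The URL is a placeholder.
    import json
    import requests
    from bs4 import BeautifulSoup

    html = requests.get("https://example.com/some-product").text
    soup = BeautifulSoup(html, "html.parser")

    for script in soup.find_all("script", type="application/ld+json"):
        try:
            data = json.loads(script.string or "")
        except json.JSONDecodeError:
            continue
        if isinstance(data, dict):
            # e.g. {"@type": "Product", "name": "...", "offers": {...}}
            print(data.get("@type"), data.get("name"))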
You are assuming people want their data to be scraped. In actuality, the data people scrape is usually proprietary to the publisher, and when it is scraped they lose exclusivity over that data.
I had trouble scraping yoga schedules in the past, and I concluded that the developers were consciously making them difficult to scrape so third parties couldn't easily use their data.
I am building a sitemap file generator and have been reading about the various limits. (50,000 URLs per sitemap and 50,000 sitemap files per index file).
I have already been building this with the strategy of organizing my sitemap files similarly to how the links are organized on the actual site. However, I am noticing that in time I will likely need to restructure due to the limits mentioned above.
So I am now thinking that, alternatively, I will store every possible link/URL in a DB table and then just run a cron job that generates the XML files, one per 50,000 URLs. This approach is more easily scalable, but it also lacks any organization. I am curious whether any SEO experts out there know if this matters to Google, or if the URLs are all seen in the same light regardless of how they are grouped.
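Roughly what I have in mind, sketched out below; the SQLite stand-in, table name, and file paths are placeholders for whatever the real stack uses:

    # Illustrative sketch of the cron job: read all URLs from a DB table and emit
    # one sitemap file per 50,000 URLs plus a sitemap index. Names are placeholders,
    # and URLs are assumed to already be XML-safe.
    import sqlite3  # stand-in for whatever database the URLs actually live in

    CHUNK = 50000
    NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

    conn = sqlite3.connect("site.db")
    urls = [row[0] for row in conn.execute("SELECT url FROM sitemap_urls")]

    index_entries = []
    for i in range(0, len(urls), CHUNK):
        chunk = urls[i:i + CHUNK]
        filename = f"sitemap-{i // CHUNK + 1}.xml"
        with open(filename, "w", encoding="utf-8") as f:
            f.write(f'<?xml version="1.0" encoding="UTF-8"?>\n<urlset xmlns="{NS}">\n')
            for u in chunk:
                f.write(f"  <url><loc>{u}</loc></url>\n")
            f.write("</urlset>\n")
        index_entries.append(f"https://www.example.com/{filename}")

    # The index file points Google at each chunked sitemap.
    with open("sitemap-index.xml", "w", encoding="utf-8") as f:
        f.write(f'<?xml version="1.0" encoding="UTF-8"?>\n<sitemapindex xmlns="{NS}">\n')
        for loc in index_entries:
            f.write(f"  <sitemap><loc>{loc}</loc></sitemap>\n")
        f.write("</sitemapindex>\n")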
The purpose of a sitemap is simply to help Google fully understand the structure and layout of your website.
That said, as long as you are using a technique that effectively communicates the layout of your site to Google, you should be alright. Since you still seem to be communicating the right message about your site's structure, this technique appears OK.
See here for more information.
I have 2 domains:
www.first.com
www.second.com
Let's assume that on the first one I have an online store, and on the second one I have only the products of this store (separate applications running on the server).
The product links are:
www.second.com/firstProduct
www.second.com/secondProduct
www.second.com/thirdProduct
and so on.
I want to redirect users to the first website when someone hits www.second.com itself, i.e. not a full product path.
What redirect should I use? A 301? In terms of SEO, what is the best approach?
Thanks.
Yes, 301 Moved Permanently is the code you want to return for this redirect. Search engines typically queue up 301s for updates to their results, since this status indicates that the resource is now found at the new URL and that the old one will soon be obsolete.
In your case, since you never want www.second.com/ to be accessed directly, a 301 is exactly what you want.
You might also consider adding a robots.txt file with Allow and Disallow statements, as most of the bots you actually care about for SEO will honor it.
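As a concrete illustration (the question doesn't say what www.second.com runs on, so the small Flask app below is only an assumption), the root-only 301 could look like this:

    # Illustrative only: 301 the bare domain to the store, leave product paths alone.
    # Flask is an assumed stack; the same idea applies to any server or framework.
    from flask import Flask, redirect

    app = Flask(__name__)

    @app.route("/")
    def root():
        # Permanent redirect for the bare domain only.
        return redirect("https://www.first.com/", code=301)

    @app.route("/<product_name>")
    def product(product_name):
        return f"Product page for {product_name}"  # placeholder for the real app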
I am developing a large single-page web application (SPA) with a NodeJS back end. A few screens in the application (About, Contact Us, FAQ, News, etc.) would ideally be populated dynamically by the customer. Is there a simple solution that lets the customer customize this content on an ongoing basis without having to redeploy the application?
I don't want to build the application as part of a CMS, as most of the pages do not follow this model. I really just want a small add on to manage these few screens.
I've looked briefly at the XML-RPC WordPress API. There's also the option to use a Google Spreadsheet as a simple CMS.
Has anybody used these or any other options in an SPA/NodeJS app? I would prefer a pre-canned Node module that I could just drop in my app, but I couldn't find any in my searching.
I ended up going the Google Spreadsheet route. It was fairly simple to load rows from a spreadsheet with an AJAX request using the following URL syntax:
https://spreadsheets.google.com/feeds/list/<spreadsheetId>/od6/public/values
This article got me going in the right direction for parsing the data that comes back from the spreadsheet.
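For anyone trying the same thing, here is a rough server-side sketch of reading that list feed; it assumes the legacy spreadsheets.google.com endpoint is still reachable and returns Atom XML, and the same request can be made from browser-side code:

    # Rough sketch: pull rows from the public list feed above. Assumes the legacy
    # spreadsheets.google.com feed is still available and returns Atom XML.
    import requests
    import xml.etree.ElementTree as ET

    SPREADSHEET_ID = "<spreadsheetId>"  # placeholder, same as in the URL above
    URL = f"https://spreadsheets.google.com/feeds/list/{SPREADSHEET_ID}/od6/public/values"

    ATOM = "{http://www.w3.org/2005/Atom}"
    root = ET.fromstring(requests.get(URL).content)

    for entry in root.findall(f"{ATOM}entry"):
        title = entry.findtext(f"{ATOM}title")      # typically the first column
        content = entry.findtext(f"{ATOM}content")  # remaining columns as text
        print(title, "->", content)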
I was also able to have the spreadsheet reference image files that are also stored in Google Drive. To generate the URLs to those images, I used the format:
http://googledrive.com/host/<GoogleDriveFolderId>/<fileName>
For this solution to work, you have to publish the spreadsheets to the web, and the folder that contains them has to be shared publicly with anyone who has the link (it doesn't have to be searchable if you don't want it to be). The folder that contains the images has to be shared as public and searchable.
Overall, I think it ended up being a pretty simple, yet powerful solution to allow non-programmers to edit small bits of content on my site.
We can create different URLs for a site by using alternate access mappings for the different zones that are available. But I am trying to create two URLs within the same zone, one for internal use and one for external use, and I am unable to find a way to do it. Is it possible to create this? If so, can anyone please explain?
As was said before:
You can only have one URL per Zone...
For two different URLs you can use host-named (host header) site collections instead.
This gives you the ability to create separate site collections, each with its own URL. Be aware that you can no longer use AAM if you go with the host header site collection approach!
Hope it helps
You can assign different URLs to one web application, but you can only have one URL per zone. See this example:
My default URL for the alternate access mapping collection "SharePoint Portal" (a web application) is http://sp.dev, and the custom URL is http://blabla.com. You can edit these via "Edit Public URLs" in Central Administration.