SEO and Webdesign: hyphens versus underscore inside URLs

For some time now SEO specialists (and a lot of other people with some idea about search engine optimization) have recommended that the URLs on a page contain hyphens instead of underscores, because some search engines treated hyphens as word separators while underscores were treated as just another letter inside a word. Lately, though, at least since 2007 in Google's case, all the main search engines treat both as word separators, and some of them treated underscores as separators from the beginning. There are still people who don't know that: I've seen recent articles (written at the end of last year) that still treat underscores as letters inside words.

As a web developer I started using underscores to separate words from the very beginning, because of the language: I generate the URL from the title of the article (whatever the content, no matter if it's a blog, a news portal or an online store), and in Romanian there are a lot of words that use hyphens as grammar requires (I could give you some examples, but they would be meaningless for most of you), and even some names contain hyphens.
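To make the idea concrete, here is a minimal PHP sketch of that kind of slug generation, assuming a PHP-based site like the ones mentioned later in these articles; the function name and the sample title are invented for the illustration, not taken from a real project.

<?php
// Build a URL slug from a title: words are joined with underscores,
// while hyphens that belong to the words themselves are kept as they are.
function make_slug($title) {
    $slug = mb_strtolower(trim($title), 'UTF-8');        // lowercase the whole title
    $slug = preg_replace('/[^a-z0-9\- ]+/', '', $slug);  // keep only letters, digits, hyphens and spaces
    $slug = preg_replace('/ +/', '_', $slug);             // spaces become underscores, the word separators
    return $slug;
}

echo make_slug('Nord-americanii au ajuns pe Luna');
// prints: nord-americanii_au_ajuns_pe_luna
?>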

As I told you in the earlier articles, the best URL is one that is readable by both humans and search engines and is as clear and as easy to understand as possible. From my point of view, these conditions are met by using underscores as word separators and hyphens only where they are needed.

WordPress, for example, started from the beginning with hyphens as the word delimiter and has stuck with it. That's a good thing, even if sometimes I am not sure whether the names in a title belong to two individuals or form one name created from two other names put together.

 

The source is here.

SEO and Webdesign: the URLs and internal linking for a site

 

There are a few things to say about the internal linking of a site, now that we have talked about friendly URLs. The first is about canonical URLs. Canonical essentially means standard or authoritative, so a canonical URL for search engine marketing purposes is the URL you want people to see. Canonicalization is the process of picking the best URL when there are several choices, and it usually refers to home pages. For example, most people would consider these the same URLs:

  • www.webdesign-software-code-seo.com

  • webdesign-software-code-seo.com/

  • www.webdesign-software-code-seo.com/index.html

  • webdesign-software-code-seo.com/home.asp

They look very similar, but technically all of these URLs are different. A web server could return completely different content for each of the URLs above. When Google “canonicalizes” a URL, it tries to pick the one that seems like the best representative of that set. Depending on how your web site was programmed or how your tracking URLs are set up for a marketing campaign, there may be more than one URL for a particular web page.

The problem most search engine marketers run into involves domains that are not set up properly. In these cases the domain URL without the www prefix (e.g. webdesign-software-code-seo.com) and the domain URL with the www prefix (e.g. www.webdesign-software-code-seo.com) are considered separate web pages. Since both pages may be indexed by the search engines, you could get hit for duplicate content and, at the very least, you would be splitting your link popularity.

The easiest way to protect your site is to redirect all forms of your domain's URL to one standard URL – a canonical URL. For example, to force the use of www.webdesign-software-code-seo.com instead of webdesign-software-code-seo.com or http://webdesign-software-code-seo.com, you must have these lines in the .htaccess file (this is Apache specific; if you use IIS, the equivalent rules can be set up with an ISAPI rewrite filter).

RewriteEngine On
# if the request arrives on the bare domain (without the www prefix)...
RewriteCond %{HTTP_HOST} ^webdesign-software-code-seo\.com$ [NC]
# ...permanently redirect it to the same path on the www form of the domain
RewriteRule ^(.*)$ http://www.webdesign-software-code-seo.com/$1 [R=301,L]

This way the problem is resolved on the server side, just before the script is accessed. Another way to approach the problem is for the web designer to define inside the page a variable that, in our case, holds http://www.webdesign-software-code-seo.com and to use it to construct all the URLs inside the page.
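A minimal PHP sketch of that idea follows; the constant name and the linked path are invented for the example.

<?php
// one single place that decides the canonical host for the whole site
define('BASE_URL', 'http://www.webdesign-software-code-seo.com');

// every internal link is built from the same constant,
// so the pages can never mix the www and non-www forms
echo '<a href="' . BASE_URL . '/freelancer-jobs/">Freelancer jobs</a>';
?>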

 

The previous part of this article described one way duplicate URLs get created. Another way to end up with such URLs is when they are generated dynamically and there are flaws in the script. For example, when you generate the URLs from the title alone and you give the same (or very similar) titles to your articles, you can end up with identical URLs. I am also thinking of the word separators (hyphens and underscores, but that is another subject and I will write about it later).

There are websites (like the ones created with the WordPress blogging scripts) that can generate multiple URLs for the same article.

Although it makes sense to enforce unique URLs for all the web pages inside any site, as Google imposes a penalty on suspected duplicate content, it is not always possible to do so. Google has introduced a way to specify the unique URL (the canonical URL) of any web page by using a link tag. Setting this tag is simple: you first decide which URL you wish to use, and then add the link tag to the <head> section of your HTML page, something like the next example.

<link rel="canonical" href="http://www.webdesign-software-code-seo.com/my-blog-post-number1" />

Google claims that this is not treated as a directive, but as a hint that is honored strongly. Additional URL properties, like PageRank, are transferred as well.

 

The next important thing is bad URLs, I mean URLs on a site that cannot be found, clicked, visited or submitted to social media. It may be a technical problem (the server is down, the internet access has problems, etc.), but it may also be a design or maintenance problem: indexed links may disappear from time to time (temporarily or forever).

The technical problems can be solved (and most of them are solved in time), but the other problem depends entirely on you. When you are designing a page, make sure that the URLs on the site have no flaws and that all of them work before putting the site online. It is no fun for a human to find a list of articles on a page, with a lot of content in those articles, and then hit a 404 error page when trying to access one of them because the URL was not created correctly, especially when the same article can be reached fine from another part of the site. The search engines act in a similar way: they will not index bad URLs (the ones that return 404 error pages).

When an article is deleted (for whatever reason, at the discretion of the owner of the site), there are two ways to react from the designer's point of view: either the URL disappears completely (and the search engines will no longer find an already indexed URL), or the URL stays the same and the content is replaced with a message similar to “this article has been erased”. From the search engines' point of view neither way is perfect, but URLs appearing and disappearing periodically with no message is definitely not a good thing.

From the SEO point of view, bad URLs are also the ones that contain special characters (spaces, apostrophes and sequences like %e2%80%93), but that can be corrected by generating, or redirecting to, friendly URLs.

 

The internal link structure of a website works in the same way that external links do: in order to build a quality internal link structure, a site has to add links on its sub-pages, and these links need to contain the keyword or search term that is being targeted. There is no limit on how many links from internal sub-pages can be created… there can be 10 pages or 10 billion pages. When the search bot sees these links all pointing to a specific page (no duplicate URLs and no bad URLs), it reads the anchor text provided in those links. It then assumes that the page being linked to is important for the anchored keywords.
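For example, an internal link whose anchor text carries the targeted search term could look like this (the page and the wording are invented for the illustration):

<a href="http://www.webdesign-software-code-seo.com/freelancer-jobs/">freelancer jobs in web design</a>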

Deep link building is a good way to separate great websites from mediocre ones: linking to the sub-pages of a website is one of the best ways to grow long tail search traffic. It is also a great way to build up the site's authority.

The first step when building deep links is to define the target (or targets) of the site and then keep them in focus all the time. For example, this site is dedicated to web design, software code and SEO, and you will not find anything else here. Usually web sites are customized to their owner's needs and have only a few targets (a limited number of subjects for the content and keywords). Bloggers can write about whatever they want, but even they have to adjust the subjects and contents of their articles according to what people are searching for.

The second step in the process requires identifying the sub-pages that can be used to gain more search traffic and can help boost top level keyword rankings. Top level keywords are very competitive and they usually require substantial link building campaigns to move up in the rankings. A great way to boost the authority of pages that target these competitive search terms is to increase the internal authority of those pages. One way to do this is by adding a contextual link on a sub-page that points to the competitive top level page. Adding deep links that point at these pages will boost their authority, and the authority of your site will increase together with the authority of the pages inside it.

In one sentence, make the important pages on your site as visible as possible (inside the site and on other pages). For the first part, the web designer is the person who can help you (a smart one, with SEO knowledge, can even give you some suggestions), and for the second one you should ask the people who do marketing (or SEO marketing) to do the job.

 

The source is here.

SEO and Webdesign: SEO friendly URLs

At the very beginning of this article I must tell you that URL stands for Uniform Resource Locator, which is a Uniform Resource Identifier (URI) that specifies where an identified resource is available and the mechanism for retrieving it. A URI is a string of characters used to identify a resource or a name on the internet. Such identification enables interaction with representations of the resource over a network using specific protocols.

It is said that in popular usage, and in many technical documents and verbal discussions, URL is often incorrectly used as a synonym for URI. I have also met a lot of people who use the term without knowing where it comes from.

The way you create the URLs inside your page is one of the most important ways to improve search engine optimization. Friendly URLs means URLs that are readable by humans, but this counts for the search engines as well.

There are two important points of view. The first one is the human point of view: how many of you would access a URL that looks like http://www.webdesign-software-code-seo.com/8548-954page.html, and how many would access http://www.webdesign-software-code-seo.com/freelancers-wanted/ or http://www.webdesign-software-code-seo.com/freelancer-jobs/? The tendency is overwhelmingly in favor of the second type of URL, because those URLs contain words that have meaning for us and give clues about what the page contains.

That is why it is better to use words in the URLs for the search engines as well (the second point of view), especially if they are keywords (I mean the words that are the most relevant to the content and the subject of the article, the ones users may search for on the internet). The best way to make a friendly URL is to create it based on the title of your article (and the date, if this helps identify the URL faster on a big site) and to underline the keywords using the heading tags. The search engines see all these elements, which helps them index the page more easily and improves its ranking.

Friendly URLs are always static URLs. There are a lot of sites that generate URLs like this: http://www.some-site.com/index.php?category=456&subcategory=12&article=44574, and that is called a dynamic URL. When spaces, apostrophes and other special characters (like %e2%80%93) appear in these dynamic URLs it is even worse: when you try to post such a link on Facebook or StumbleUpon (or any social network), there is a good chance the URL will appear broken.

From the search engines' point of view the static URL will be indexed much more easily than the dynamic form, and there is no confusion and no missing piece of what could be important inside the URL.

Also, one of the most important things is to keep the URL as short and as descriptive as you can (clean and simple). The shorter the URL, the more successful it will be, both for web rankings and for the visitors' use (this includes people copying the URL for linking purposes). Posting URLs on Twitter (for sharing information as well as for backlinking) has also become a habit in the last few years, and a longer URL cannot be posted there. Moreover, if a URL string is long, the weight and relevance of each word inside it is diluted. If your URLs are keyword-rich, including too many other words or phrases means that the importance of the keywords is lost amidst all the other words.

For example consider the following URLs:

www.readmybooks.com/horror-science-fiction-temporal-travel-book.htm
www.readmybooks.com/store/books/science-fiction/horror-temporal-travel/book6888767.htm

The first URL is succinct, has the relevant keywords and no surplus syntax to dilute the importance of these words, and is easy for somebody to read, copy and paste. The second URL is more complicated and the relevance of the keywords is reduced.

URLs can be constructed using uppercase or lowercase letters, or a mixture of both, and servers make the difference between them. Mixing uppercase and lowercase (even when following standard grammar rules, such as using a capital letter for a name) can make your site structure unnecessarily complicated. You also run the risk of losing visitors who forget to use the required capital letter in the URL and then cannot access the page. The most common way to post URLs across the internet is to post them in lowercase. If you are redeveloping your URLs and come across pages using uppercase, you should create a permanent 301 redirect to the lowercase version to avoid confusion.
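A minimal sketch of such a redirect for a single page, using the .htaccess file discussed below (the paths are invented for the example):

# permanently send the old mixed-case address to the lowercase one
Redirect 301 /Freelancer-Jobs/ http://www.webdesign-software-code-seo.com/freelancer-jobs/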

There are a few tools and methods used for URL rewriting in order to obtain friendly URLs. One of the most common is the .htaccess file placed in the root directory of the site (this is the special file that sets up the deal for you; it can contain all sorts of directives for the Apache server, so if you are not using an Apache-based server you will have to read your server's manual to see how to do the same thing). It works perfectly with the Apache server and PHP (for example), so it works fine with all the PHP-based systems. I will give some examples of how to work with .htaccess and the other tools later.
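As a small taste of those examples, here is a sketch of the idea (the script name and the parameter are invented, not taken from a real project): the visitor and the search engine see the friendly URL, while Apache quietly hands the request to the dynamic script.

RewriteEngine On
# /freelancer-jobs/ is the friendly address shown to the world...
# ...while the PHP script still receives its usual query string
RewriteRule ^freelancer-jobs/?$ index.php?article=44574 [L]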

Finally, one very important thing when developing and maintaining a site is to keep the structure of the URLs the same. Do not change the rules used to generate the URLs after the site is online, especially if they are created dynamically for the entire site. Once a URL is online it must stay the same for as long as possible (until the site drops dead or the internet ends). The search engines do not like sites whose URLs change from one month to another (for example), so try to change them (even to correct them, when necessary) as little as possible. Also try to keep the URLs consistent across the entire site, not different from one section to another; this will make future development much easier, as there will be a standard convention to follow… and if your URL structure is used consistently across the site, visitors will also find it much easier to understand how the information is organized and stored, and they will find what they are searching for faster.

 

 

The source is here.

About webdesign-software-code-seo.com

The purpose of the page is simple: to share the experience (mine, or that of any person interested in writing there) about web design, computer software development and search engine optimization. If you are pleased with the page and with the solutions offered, you will be able to contact us to have custom software written for you or to improve your web page (its looks, or design if you prefer the English borrowing, or SEO advice).

You will be able to choose any of the freelancers on the team for your project, but all orders will be placed through Supravirtual SRL.

The language of the page is, at least for the moment, English. Perhaps in the future we will also write in other languages.

The page is here.

About webdesign-software-code-seo.com

The purpose of this page is simple: to share our experience (mine and that of whoever is interested in joining me here) about web design, software development and search engine optimization. If you are pleased with this page and the solutions offered here, you may contact us to create custom software for you or to improve your web page (as design or as SEO advice).

You may select any of the freelancers in the team for your project, but all the orders will go through a company called Supravirtual SRL. That is a Romanian company, if you need to know that (I know that some will not like it; that will be entirely your problem), but we offer only the best quality to our customers. Just try us.

 

The page is here.

SEO and Webdesign: The content of the webpage

One of the most important parts of search engine optimization is the content. From the human point of view, if you do not have relevant content, nobody will stay on the page for long. From the search engines' point of view, the keywords and the heading tags that organize the text are the most relevant.

Once you have your meta tags set up correctly you will want to move on to the page content; here you will have to understand the H1 to H5 heading tags and the value of these tags on the page. Additionally, your content will need to be rich and unique, to the point where it is as friendly for humans as it is for the search engines; being rich and unique is the reason all of them will remember your web site. What is the point of lots of people accessing your site if all they see is a number of words that mean great things to the search engines but almost nothing to them, because the words are very common and they have met the same sentences and phrases in ten other places?

The easiest and most basic SEO rule is that the search engine spiders can be relied upon to read basic body text 100% of the time. By providing a spider with basic text content, you offer the search engines information in the easiest format for them to process. Some search engines can strip text and link content from Flash files, but nothing compares with basic body text when it comes to providing information. You can always find at least one way to add text to a site without compromising its functionality, look and feel.

The content itself should be thematically focused, meaning you should keep it simple and clear. When you want to approach multiple topics, try to put them on different pages inside the site; it will be clearer for humans as well as for the spiders. Always keep in mind that unique, topic-focused content for the pages inside your site is a basic SEO technique that makes everything simpler and also improves the quality of your content.

The ways of writing the content (and some suggestions) are another subject; I will approach it some other time. But one thing is for sure: not all of us are gifted for such a thing, some are better than others. If you are not satisfied with the content of your site, or you do not know how to write it, ask somebody who knows. There are a lot of bloggers and copywriters (people who write web content for money) in this big world who can help you.

Heading Tags

The heading tags used to be only the responsibility of the web designer, but since the development of the WYSIWYG editors (what you see is what you get editors) you can format your own content a lot better than before. You have seen them in WordPress, Blogger, Blogspot, in any blogging system by the way.

I have already written about choosing the keywords. Use the heading tags to underline the keywords: many search engines place more emphasis on the text within heading tags, so make sure they use keywords. Use one <h1> tag per page with the most important keywords, and use the other heading tags (<h2>, <h3>, etc.) to provide variations and support the main heading.

 

<h1>Books</h1>

<h2>Romantic books</h2>

<p>… some list about the romantic books and their love stories…</p>

<h2>Adventure books</h2>

<p>… Robin Hood, Robinson Crusoe, The lord of the rings…</p>

<h2>Horror books</h2>

 

<p>… Stephen King and all the other writers…</p>

Page text and the rest of the content

 

The relevance of your pages is given, in a way, by the keywords and common phrases that people might look for, so be sure your pages contain them. Write according to the theme of the site, but write about what interests others. Be careful about the frequency of your keywords… you want them to occur at least a few times if possible, but do not repeat them so much that the copy becomes unnatural. It is important to spread the keywords around discreetly, without making it obvious. No single keyword should take up more than 12-24% of the entire body text; beyond that it is often considered spam.

Also remember that text contained within images will not be picked up by the search engines; they still cannot read it. Only actual text on the page will be indexed.

Do not use too much content right on the front page: the loading time of the page and the redundancy of additional words used will reduce the chance of showing up in relevant searches.

HTML Code Page

If your website uses a language different from the default of the search engine your target audience prefers (for example: someone from America or Asia finds something on a web page from Romania), or if your website uses special characters unique to that language, make sure to implement the proper HTML codepage tag. The Unicode versions of special characters (HTML-encoded characters) are more or less impossible to look up, for now, in most search engines.
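For example, a Romanian page served with UTF-8 characters would declare its encoding in the <head> section with a tag like the following:

<!-- tells the browser and the spider which character set the page uses -->
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />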

 

Lately, I mean in the last few years, when I design pages for my clients I always tell them to write as much and as well as they can. My job mostly ends when the page is fully functional, and the maintenance I do over time is limited to the scripts, with almost nothing touching the content of the page. That depends on the project, of course, but I have had the chance to see clients adding products to their sites with only a few words as description, nothing special. Those pages did not have many visitors; only a few people found them using the search engines, and most of the visitors came because of advertising in “real” life.

Remember this: the content of your page is its best advertiser, and the search engines, through the structure of the site, are its tools.

 

The source is here.

SEO and Webdesign: HTML HEAD Content

Every search engine has its own rules for processing the HTML tags inside the head tag. Following are some general rules.

The <Title> Tag

First, the title of the html page should be relatively short and describe the page content accurately. Wherever possible, try to include keywords but without distorting the true purpose of the title.

It is one of the most important elements of SEO when it is used properly. A website can increase organic search traffic for each page by using an appropriate keyword within this tag, but this is only effective when the actual content of the page is about that specific keyword. Search engines index the content of the page as well as the meta tags, and they can easily figure out whether or not the title tag is appropriate for the content contained on a specific page. In order to get the full benefit of this tag, the keyword used within it should appear in its exact form within the content of the page. This is the best way to relay to the search engines exactly what a page is about and what keywords it should rank for within the search engine result pages.

The domain name should not be repeated in the title; that is often considered spam. Also, the title should not be the same as the filename, and the filename should not be the same as the domain name; that too is often considered spam.

The title should not be any longer than 70-100 characters including spaces. (Google – DMOZ)

The title should not be any longer than 60 characters. (scrubtheweb.com)

The title should not begin with the domain name, it is often considered as spam.
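Putting those rules together, a title for one of the pages of this site might look like the invented example below: short, descriptive, keyword-rich, and neither starting with nor repeating the domain name.

<title>Freelancer jobs in web design, software code and SEO</title>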

The <Meta Keywords> Tag

The meta keywords tag is the least important of the meta tags, because the majority of current search engines no longer support it and place little to no importance on it, but it is good to have it filled in on each page. Use the description and keywords meta tags in the head of each web page and make these tags different on each page. The search engines do not like duplicate meta tags.

First of all, keep the keywords as descriptive as possible. You should not use words that are not present in the body of the page (for example: do not enter movies in the keywords tag when the page is only about books). Redundant characters will not hurt overall results; however, words after the first 300 characters rarely help in any way.

The first keyword should be the main keyword the content is based around; this keyword generally appears within the title of the article or the page content. The second keyword should be the second most important keyword the content is based around. Then you should list the next four important keywords, or variants of the main two keywords. This has the potential to help a search bot understand what the content of each page is about.

The tag should not be any longer than 378 characters (searchenginewatch.com referring to Google).

The tag should not be any longer than 268 characters (AltaVista).

Start all keywords with capital letters. (Relevant only on alphabetical listings)

Separate keywords with the “, ” (comma, space) character combination. (Most search engines use either character as the separator ).

You may use phrases as well, but no word, not even within phrases, should be used more than 3 times. That is often considered spam.
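An invented example that follows these rules, for a page about adventure books, could look like this:

<meta name="keywords" content="Adventure Books, Robinson Crusoe, Robin Hood, Classic Novels, Book Reviews, Reading List" />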

The <Meta Description> Tag

This is a very important element of SEO; it gives websites a unique opportunity to convince searchers to visit their site and a particular page on that site. First of all, keep the description as descriptive as possible; the descriptions of web pages are often the snippets of content that appear below the clickable titles of a search listing. Creating high quality meta descriptions can improve click-through rates in the search results, while a poor description can be one of the quickest ways to get passed over by searchers within the search results.

It should not be any longer than 100 characters (Google).

It should not be any longer than 25-30 words (DMOZ).

It should not be any longer than 150 characters (scrubtheweb.com).

It should not be any longer than 200 characters (searchenginewatch.com referring to Google).
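An invented description for the same imaginary book page, kept within the limits listed above, could be:

<meta name="description" content="Reviews of classic adventure books, from Robinson Crusoe to The Lord of the Rings." />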

 

The source is here.

SEO and Webdesign: Choosing your Keywords

The keywords are relevant for the search engines in two places inside the site: the <Meta Keywords> tag and the content of the site. When you write content, keep a very specific idea in mind and choose your keywords before you start to write; that makes an important difference between creating an optimized site from scratch and the work needed for later optimization.

Research done by Entireweb says that 31% of people enter 2-word phrases into search engines, 25% of all users look for 3-word combinations and only about 19% of them try their luck with a single word.

Do not choose a keyword to optimize your site for if you do not have the slightest chance of ranking well for it because of the fierce competition.

Do not choose a keyword that nobody looks for. Make a simple comparison: how many people search for differential equations versus shoes versus porn (or sex)? So use only generally popular keywords if you do not need targeted traffic.

Do not choose a keyword that does not relate strongly enough to your content.

Do not use words that may get your site filtered or banned from search engines.

Do not use images with file names or ALT tags (Alt attributes of IMG tag) that may get your site filtered or banned from search engines.

Use lots of relevant content, well laid out into separate pages.

For best results optimize one page for one keyword.

Do not post online half-finished sites.

 

The source is here.

SEO basics

The purpose of search engine optimization is to make a website as search engine friendly as possible… so it can easily be said that basic SEO is all about common sense and keeping the web page very simple. It is really not difficult; you only need to understand how search engines work. The problem with a lot of web developers is that they do not consider search engine optimization part of designing a site, so they simply ignore this part… I have some examples to give, things I have met in the last years, but they were improved in time or went offline.

Well, excluding SEO from web design is wrong: why create a great looking web site if nobody will find it? Of course, word of its existence will spread and you will have some visitors in time, but usually most visits come from the search engines. That is why, from my point of view as a web designer, SEO is one of the main things to think about when creating the structure of a website.

In the beginning there are two aspects of search engines to consider: the first is how the spiders work, the second is how the search engines figure out which pages relate to which keywords and phrases.

The electronic spiders (also known as bots) are the tools used by the search engines to collect data about a website; they copy its content, which is stored in the search engine's database. They are designed to follow links from one page to the next, to record these links, to assimilate the content of the web pages and to send other bots to gather data from the linked pages. The process goes on forever, 24 hours a day, 7 days a week, every week. By now the databases of the search engines measure their size in thousands of billions of records.

It can safely be assumed that the spiders will find your site on their own if there are links on the web pointing to it, so there is no need to submit the site to the major search engines manually or electronically. Search engines have the ability to judge the topical relationship between pages that are linked together, so the most valuable incoming links on the web come from sites that share topical themes.

What you have to prioritize in SEO is helping the spiders find your website and traverse it from A to Z. So you have to provide easy to follow text links to the most important pages of the site in the navigation menu on each page, and one of those links should lead to a text-based sitemap containing a text link to every page on the site. The sitemap can be the most basic page of the site (with no decorations and no interactive parts for the visitors), as its purpose is more to direct spiders than to help lost visitors. As a designer you should also keep in mind that Google also accepts more advanced XML-based sitemaps, which you can read about in their Webmaster Help Center.
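For the XML variant, a minimal sketch showing the shape of such a file (the URL and the date are invented for the example) is:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.webdesign-software-code-seo.com/freelancer-jobs/</loc>
    <lastmod>2009-06-01</lastmod>
  </url>
</urlset>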

There might be parts of your site that you do not want the spiders to visit; in this case you can restrict their access using a “robots.txt” file. The spiders will not add to the search engine databases the files you list in the robots.txt file, but the file has to be organized in a specific format.
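That format is very simple; a short example that keeps two invented directories away from every spider looks like this:

# applies to all spiders
User-agent: *
# keep the administration and temporary areas out of the index
Disallow: /admin/
Disallow: /tmp/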

There are two very important parts of search engine optimization: offering the spiders access to the site, and the content of the site. Having an optimized page depends on both of them; one cannot exist without the other. You can be the best writer in the world, but if nobody reads your work you are nobody; and if you write nonsense and everybody is able to read you but nobody is really interested, you are still nobody. On the internet the search engines are supposed to provide their users with lists of pages that relate to the search terms people enter in their search box. So these search engines need to determine which of the thousands of billions of pages are relevant to a small number of specific words… in order to do this, they need to know how your site relates to those words, and for that they assimilate the content of the site.

When examining a page, the search engines look for a few elements: the URL of the site, the site title, the description meta tag (the last two elements are found in the head section of the source code), and then the content of the site (the spiders read the pages just the same way Europeans read the papers: from left to right and from top to bottom, following the columns). Every part is important, and each will be treated separately later on this site.

Finally, for this article anyway, the search engines operate on funds as well: the funds are collected from selling advertisement placement, selling listings, or both. They sell the area above or to the right of the search results as advertisement space and display only relevant results based on what the users were looking for (the search is applied to the advertisements database as well). Their logic is that most people will be misled by the placement of such links and will choose them instead of the actual results.

The search engines that operate by selling listings will not show your website within the results, regardless of its relevance, unless you sign up for their service.

There is always the option to get your site indexed in the directories the search engine companies buy information from. However, directories like Yahoo.com, DMOZ.org, Business.com, Best of the Web and others are moderated based on relevance and content, so getting listed there may take some time and effort.

Most search engines rank your website by relevance, which is measured against a threshold of keywords. Some search engines, Google among them, will also sort the relevant sites by their popularity, measuring the page rank by the actual links leading to the site. Also, the text or ALT tag text accompanying these links will influence the keywords for which the website is shown in the listings.

Some search engines will consider a website more and more popular as it is clicked on the results page. These include AltaVista.com.

What you must remember, if you start working in this field, is that there are a lot of ways to do things in SEO (they vary a lot) and the possibilities and combinations are infinite. You may experiment with any campaign freely, but only with a reasonable knowledge of the DON'Ts (what you MUST NOT do) will you be able to maintain a sustainable development plan for your website.

 

The source is here.

Opera and relatively positioned images

I have not really gotten around to writing here about the things I do, but it is time to start, with the launch of the new domain in sight, where I will soon begin to write more about web design, software code and SEO.

To begin with, I have to mention that I do not really use Opera. Until now I have used it only occasionally at most; it is not one of my usual browsers. Recently I started dealing with it because one of my clients uses it constantly (a whole team, in fact), so I had the distinct “pleasure” of noticing that the images on the page I had built were not displayed properly. While in Firefox, Chrome and Internet Explorer they appeared exactly where they should, where they were wanted, in Opera they fell (if you can call it that) out of the desired position, to the bottom of the div tag in which they were placed.

The reason is simple: Opera has a “small” problem with images that are given the property position: relative. I do not know if it is a bug or an intentional omission, but something like this can mess up the display of a page that otherwise looks fine. The solution is simple: set position: absolute and, using the top and left properties, position the image inside the div tag wherever you want.
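A minimal CSS sketch of that workaround (the class name is invented for the example):

/* the containing div keeps its place in the normal flow of the page */
.photo-box { position: relative; }

/* instead of position: relative on the image, which Opera misplaces,
   the image is positioned absolutely inside the containing div */
.photo-box img { position: absolute; top: 10px; left: 20px; }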

In CSS there are 4 ways to position any element of a web page: static, absolute, relative and fixed. The differences between them are the following:

the static position (position: static) is the element's default position; if no other positioning is set, the element will be displayed on screen based on the place where it appears in the HTML document. If you try to apply other properties to the element, such as left and top (which set the position relative to the left and top edges of the parent element), they will be ignored.

the absolute position (position: absolute) is the most straightforward one: the element is taken out of its place among the other elements of the page (without affecting them in any way) and is positioned at an exact location on the page. Other properties can be applied, such as left, top, right and bottom (which set the position relative to the left, top, right and bottom edges of the parent element).

the relative position (position: relative) takes as its reference the position the element would have among the elements loaded on the page, and uses left, top, right and bottom for finer positioning.

the fixed position (position: fixed) is very similar to the absolute position; it is calculated relative to the edge of the screen, but fixed elements always stay where they are. All the other elements will be displayed above them (for example: a watermark stamp or a background image).
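As a tiny illustration of the four values side by side (the selectors are invented):

.box-static   { position: static; }                      /* the default: follows the document flow */
.box-relative { position: relative; left: 15px; }        /* shifted from its normal place, the flow is preserved */
.box-absolute { position: absolute; top: 0; right: 0; }  /* taken out of the flow, pinned inside its positioned parent */
.box-fixed    { position: fixed; bottom: 0; left: 0; }   /* pinned to the screen, stays there while the page scrolls */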

The article was taken from here.