
How to Select Elements by Text in XPath

When scraping websites, you often need to select elements on the page based on their text content. This allows you to precisely target the data you want to extract. XPath, a query language for selecting nodes in XML and HTML documents, provides a few ways to do this using the contains() and text() functions.

In this guide, we'll take an in-depth look at how to leverage these text selection techniques in your XPath expressions. We'll cover the syntax, walk through examples, and discuss some best practices to help you effectively select elements by their text content when web scraping.

Using contains() to Select Elements Containing Text

The XPath contains() function allows you to select elements that contain a specific text substring. It takes two arguments:

  1. The string to search in (commonly text() or ., which XPath converts to a string)
  2. The text substring to match

The syntax looks like:

//element[contains(text(), "substring")]

This will select all element nodes whose direct text content contains the specified substring.

For example, consider the following HTML:

<ul>
  <li>Apples</li>
  <li>Oranges</li>
  <li>Pears and Grapes</li>
</ul>

To select all <li> elements that contain the text "and", you would use:

//li[contains(text(), "and")]

This would match the third <li> element, "Pears and Grapes".
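
If you want to verify this yourself, here is a minimal, self-contained sketch using lxml (the same library used in the examples later in this guide), run against the snippet above:

from lxml import html

# The fruit list from the example above
tree = html.fromstring("""
<ul>
  <li>Apples</li>
  <li>Oranges</li>
  <li>Pears and Grapes</li>
</ul>
""")

# Only the third <li> contains the substring "and"
for li in tree.xpath('//li[contains(text(), "and")]'):
    print(li.text)  # Pears and Grapes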

The contains() comparison is case-sensitive. XPath 2.0 adds the lower-case() and upper-case() functions, which can normalize the casing:

//li[contains(lower-case(text()), "and")]

However, most scraping tools (lxml, browsers, Selenium) implement XPath 1.0, where these functions are not available.
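
In XPath 1.0, the usual workaround is translate(), which maps each uppercase letter to its lowercase counterpart before the comparison:

//li[contains(translate(text(), "ABCDEFGHIJKLMNOPQRSTUVWXYZ", "abcdefghijklmnopqrstuvwxyz"), "and")]

Note that translate() only remaps the characters you list, so any non-ASCII letters you care about need to be added to both sets explicitly.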

Note that contains(text(), ...) only looks at an element's own direct text nodes, and in XPath 1.0 only the first of those nodes is used for the comparison. It therefore cannot match a phrase that spans a child element. For instance, in this HTML:

<p>
  Select <em>this</em> paragraph.
</p>

The XPath //p[contains(text(), "Select this")] would not match the <p>, because "Select" and "paragraph." sit in the <p>'s own text nodes while "this" lives inside the <em>. To match against the element's full string value, including text inside child elements, compare against the current node (.) instead:

//p[contains(., "Select this")]
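
A minimal lxml sketch (using a made-up one-line version of the snippet above) makes the difference concrete:

from lxml import html

tree = html.fromstring("<p>Select <em>this</em> paragraph.</p>")

# text() only sees the <p>'s own text nodes, so the spanning phrase is not found
print(tree.xpath('//p[contains(text(), "Select this")]'))  # []

# "." is the element's full string value, including the text inside <em>
print(tree.xpath('//p[contains(., "Select this")]'))       # [<Element p at 0x...>]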

Using text() to Select Elements by Exact Text

While contains() is useful for partial text matches, sometimes you need to match the text content exactly. You can do this by comparing text() against a string, which matches elements whose direct text content equals that string.

The syntax is:

//element[text()="exact text"]

For example, with this HTML:

<div>
  <p>Hello world!</p>
  <p>Hello again</p>
</div>

The XPath expression //p[text()="Hello world!"] would select only the first <p> element. The second <p> element does not match, because its text content is not exactly "Hello world!".

Unlike contains(), the text() function only matches the direct text content of an element. It does not match text within child elements. For instance, //div[text()="Hello world!"] would not match anything in the above HTML, because the <div> itself does not directly contain the text "Hello world!". That text is within the <p> child element.
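
Here is a minimal sketch, again with lxml and the snippet above, that shows both behaviours:

from lxml import html

tree = html.fromstring("""
<div>
  <p>Hello world!</p>
  <p>Hello again</p>
</div>
""")

# Exact match: only the first <p> has the text "Hello world!"
print(tree.xpath('//p[text()="Hello world!"]'))    # [<Element p at 0x...>]

# The <div> has no direct text node equal to "Hello world!", so nothing matches
print(tree.xpath('//div[text()="Hello world!"]'))  # []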

Like contains(), these exact text comparisons are case-sensitive. The same workarounds apply: lower-case() or upper-case() in XPath 2.0, or translate() in XPath 1.0.

Combining Text Selectors with Other XPath Expressions

Text selectors become even more powerful when combined with other parts of XPath expressions, such as tag names, attributes, and positional selectors. This allows you to create very targeted selectors to drill down to exactly the elements you need.

For example, you could use the following XPath to select <a> elements containing the word "click" in their link text, but only if their class attribute is exactly "cta-button" (note that @class="..." matches the whole attribute value, not individual class names):

//a[contains(text(), "click") and @class="cta-button"]

Or this expression to select the third <p> element on the page whose text content starts with "Introduction":

(//p[starts-with(text(), "Introduction")])[3]

The parentheses matter: without them, the [3] positional predicate is applied relative to each parent element rather than across the whole document.

By mixing and matching different XPath constructs, you can build very specific selectors to handle almost any web scraping scenario.
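
As a minimal sketch (against a made-up pair of links, not a real page), here is how the class-and-text selector behaves with lxml:

from lxml import html

tree = html.fromstring("""
<div>
  <a class="cta-button" href="/buy">Click here to buy</a>
  <a class="nav-link" href="/docs">Click here for docs</a>
</div>
""")

# Both links contain "Click", but only the first also has class="cta-button"
for a in tree.xpath('//a[contains(text(), "Click") and @class="cta-button"]'):
    print(a.get("href"))  # /buy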

Text Selector Examples with Python Libraries

Let's look at some practical examples of using XPath text selectors with common Python web scraping libraries.

Example with lxml and requests

import requests
from lxml import html

# Send a GET request to the webpage
page = requests.get('https://example.com')

# Parse the HTML content
tree = html.fromstring(page.content)

# Select all <a> elements that contain the text "click me"
links = tree.xpath('//a[contains(text(), "click me")]')

# Print the href attribute of each selected link
for link in links:
    print(link.get('href'))

Example with BeautifulSoup

BeautifulSoup does not support XPath, so there is no direct equivalent of starts-with(text(), ...). The closest built-in approach is its own string matching (if you need real XPath on the same page, parse the response with lxml as shown above):

import requests
from bs4 import BeautifulSoup

# Send a GET request to the webpage
page = requests.get('https://example.com')

# Parse the HTML content
soup = BeautifulSoup(page.content, 'html.parser')

# Select the first <p> element whose text starts with "Introduction"
# (string= accepts a function that is called with each candidate tag's string)
intro_para = soup.find('p', string=lambda s: s and s.strip().startswith('Introduction'))

if intro_para:
    print(intro_para.text)

Example with Selenium

from selenium import webdriver
from selenium.webdriver.common.by import By

# Launch a browser and navigate to the webpage
driver = webdriver.Chrome()
driver.get('https://example.com')

# Select the <button> element with the exact text "Submit"
submit_button = driver.find_element(By.XPATH, '//button[text()="Submit"]')

submit_button.click()

Tips and Best Practices

When using XPath text selectors for web scraping, keep these tips in mind:

  1. Be aware of whitespace in the text you're trying to match. Extra spaces or newline characters can cause your selectors to fail. Use normalize-space() to strip leading and trailing whitespace and collapse internal runs of whitespace (see the sketch after this list).

  2. Pay attention to capitalization. By default, text matching in XPath is case-sensitive. Use lower-case() or upper-case() (XPath 2.0) or translate() (XPath 1.0) for case-insensitive matching.

  3. Avoid overly general text selectors, as they can match unintended elements. Try to combine text selectors with element names or attributes to make them more specific.

  4. Always test your selectors against real, current page content. Websites change frequently, so selectors that worked yesterday might fail today if the text content has been updated.

  5. If a website has inconsistent formatting or user-generated content, text selectors might be unreliable. In these cases, it's often better to use structural selectors based on element names, attributes, or position in the document tree.
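
As a quick illustration of the first tip, here is a minimal sketch (using a made-up button snippet) showing how normalize-space() rescues an exact match that surrounding whitespace would otherwise break:

from lxml import html

# The button's text is padded with spaces and newlines
tree = html.fromstring("<html><body><button>\n   Submit \n</button></body></html>")

# An exact text() comparison fails because of the surrounding whitespace
print(tree.xpath('//button[text()="Submit"]'))                   # []

# normalize-space() trims and collapses the whitespace first, so this matches
print(tree.xpath('//button[normalize-space(text())="Submit"]'))  # [<Element button at 0x...>]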

Conclusion

XPath provides powerful ways to select elements based on their text content, using the contains() and text() functions. contains() is useful for matching elements that contain a specific text substring, while comparing text() against a string selects elements by their exact, direct text content.

These text selectors are even more effective when combined with other XPath expressions to create highly targeted element selectors for web scraping.

Beyond just contains() and text(), XPath has several other useful functions for working with text, such as starts-with(), normalize-space(), translate(), and (in XPath 2.0) ends-with(). Invest some time in learning these and other key parts of the XPath syntax.

With a solid grasp of XPath text selectors, you're well on your way to being able to precisely target and extract the data you need from web pages. Happy scraping!
