Set Up Selenium Web Browser Automation Using ASP.NET Core and Docker
In this post, we explore how to use ASP.NET Core and Docker to set up an automation environment for UI testing with Selenium.
There are many kinds of tests: unit tests, integration tests, acceptance tests, UI tests, etc.
For this tutorial, we will look at UI tests. This kind of test allows us to validate the user interface by launching a browser, clicking on elements, and verifying the results.
Using UI tests, we can validate the behavior of an application on many browsers: Chrome, Safari, Firefox, Internet Explorer, etc.
We can also use a specific version of a browser, for example, IE9.
In this tutorial, we will not show how to write a Selenium test in detail; instead, we will focus on browser automation in order to execute UI tests in a build environment.
So, let's go ahead and create an ASP.NET Core web project and an xUnit test project, then install the following NuGet packages:
- Install-Package Selenium.WebDriver
- Install-Package Microsoft.AspNetCore.Hosting
- Install-Package Microsoft.AspNetCore.TestHost
- Install-Package xunit
- Install-Package xunit.runner.visualstudio
Dockerfiles
Web Application Dockerfile:
Create a Dockerfile inside the web project. Here, I pull microsoft/aspnetcore:2.0-nanoserver-1709 as the base image, set /src as the working directory, copy the source, restore packages, build and publish to /app, and expose port 80 inside the container.
FROM microsoft/aspnetcore:2.0-nanoserver-1709 AS base
WORKDIR /app
EXPOSE 80
FROM microsoft/aspnetcore-build:2.0-nanoserver-1709 AS build
WORKDIR /src
COPY SetupSeleniumAspNetCoreDocker.sln ./
COPY SetupSeleniumeDocker.UI/SetupSeleniumeDocker.UI.csproj SetupSeleniumeDocker.UI/
RUN dotnet restore -nowarn:msb3202,nu1503
COPY . .
WORKDIR /src/SetupSeleniumeDocker.UI
RUN dotnet build -c Release -o /app
FROM build AS publish
RUN dotnet publish -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "SetupSeleniumeDocker.UI.dll"]
Database Dockerfile:
FROM microsoft/mssql-server-windows-developer
ADD create-blogs-data.sql /initial-scripts/create-blogs-data.sql
ADD run-scripts.bat /initial-scripts/run-scripts.bat
RUN /initial-scripts/run-scripts.bat
create-blogs-data.sql contains the test data, so I run the sqlcmd command to execute the script. The following run-scripts.bat creates the schema and inserts the data:
cd \initial-scripts
sqlcmd -i create-blogs-data.sql -o output.txt
docker-compose.yml File
Create a docker-compose.yml file.
- The web app service uses my web application Dockerfile: SetupSeleniumeDocker.UI\Dockerfile.
- The web app service depends on the db service, so Docker will pull and start the db service container before launching the web app.
- The db service builds from the base image microsoft/mssql-server-windows-developer, exposes port 1433 inside the container, and uses the following credentials: username=sa; password=LogCorner!Docker1#.
version: '3'
services:
  setupseleniumedocker.ui:
    image: setupseleniumedockerui
    depends_on:
      - db
    build:
      context: .
      dockerfile: SetupSeleniumeDocker.UI\Dockerfile
  db:
    image: setupseleniumedockerdb
    expose:
      - "1433"
    build:
      context: ./SetupSeleniumeDocker.DB
      dockerfile: Dockerfile
    environment:
      SA_PASSWORD: "LogCorner!Docker1#"
      ACCEPT_EULA: "Y"
docker-compose.override.yml File
Create a docker-compose.override.yml file: this file overrides the contents of the docker-compose.yml file (adding to and/or updating it).
Here I set ASPNETCORE_ENVIRONMENT to Docker so ASP.NET Core will use the appsettings.Docker.json file.
I map port 80 inside the container to port 8080 on the host.
I also override the db service and map port 1433 in the container to port 1433 on the host.
version: '3'
services:
  setupseleniumedocker.ui:
    environment:
      - ASPNETCORE_ENVIRONMENT=Docker
    ports:
      - "8080:80"
  db:
    ports:
      - "1433:1433"
networks:
  default:
    external:
      name: nat
If everything is pulled, built, and started, I can connect to my web application using the following URL: http://localhost:8080. The web application will connect to the SQL database using the ConnectionStrings defined in appsettings.Docker.json (the SQL database is listening on port 1433).
ASP.NET Core Environment
{
  "ConnectionStrings": {
    "Value": "Server=db;Database=LogCorner.BlogPost.Core;User=sa;Password=LogCorner!Docker1#"
  },
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    }
  }
}
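For context, nothing special is needed in code to pick up the environment-specific settings: the default ASP.NET Core 2.0 host builder already loads appsettings.{ASPNETCORE_ENVIRONMENT}.json on top of appsettings.json. Here is a minimal Program.cs sketch, assuming the standard project template:
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    // CreateDefaultBuilder loads appsettings.json, then
    // appsettings.{ASPNETCORE_ENVIRONMENT}.json (appsettings.Docker.json here).
    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}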
Here I configure ConnectionStrings, so my db context will pick the database associated with the current ASPNETCORE_ENVIRONMENT.
public void ConfigureServices(IServiceCollection services)
{
    services.Configure<ConnectionStrings>(Configuration.GetSection("ConnectionStrings"));
    services.AddSingleton<IAppSettings, AppSettings>();
    services.AddScoped((s) => Configuration);
    // ...
}
public class AppSettings : IAppSettings
{
    private readonly IOptions<ConnectionStrings> _connectionString;

    public AppSettings(IOptions<ConnectionStrings> connectionString)
    {
        _connectionString = connectionString;
    }

    public ConnectionStrings GetConnectionStrings()
    {
        return _connectionString.Value;
    }
}

public static class ConfigurationManager
{
    public static IAppSettings AppSettings { get; set; }
}

public class ConnectionStrings
{
    public string Value { get; set; }
}
And I use connectionString.Value in OnConfiguring to configure the DbContext:
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    if (!optionsBuilder.IsConfigured)
    {
        var connectionString = ConfigurationManager.AppSettings.GetConnectionStrings();
        optionsBuilder.UseSqlServer(connectionString.Value);
    }
}
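The snippets above never show where the static ConfigurationManager.AppSettings is populated. One way to do it (a sketch of my own, not shown in the original code) is to resolve IAppSettings from the service provider at startup, for example in Startup.Configure:
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Assumption: populate the static holder once at startup so that
    // the DbContext can read the connection string in OnConfiguring.
    ConfigurationManager.AppSettings = app.ApplicationServices.GetRequiredService<IAppSettings>();

    app.UseStaticFiles();
    app.UseMvcWithDefaultRoute();
}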
As you can see, I set up the database connection at runtime using C# code. It is also possible to set up the database from Docker using a SQL Dockerfile and referencing it in the docker-compose file. The two approaches are equivalent here because containers are disposable: when a container is removed, its data is gone.
When I start a new container from a SQL Server image, the container is clean and none of the data saved previously is there. A container is to an image what an object is to a class (object = container, class = image).
To keep state between multiple runs of a container, I could set up a volume mapping between the host and the container. But I will not set up volume mapping here, because I want a clean container before running the tests.
Start Docker Containers and Set Up the Database
Finally, I can now:
- Run the docker-compose build command.
- Run the docker-compose up command.
public class TestFixture : IDisposable
{
    public readonly IWebDriver WebDriver;

    public TestFixture()
    {
        // TODO: move to the folder where the docker-compose file is located
        // TODO: run the docker-compose build command
        // TODO: run the docker-compose up command
        WebDriver = new ChromeDriver(@"C:\Selenium\drivers");
        WebDriver.Navigate().GoToUrl("http://localhost:8080/");
    }

    public void Dispose()
    {
        WebDriver.Dispose();
        // TODO: run the docker-compose down command
    }
}
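To fill in the TODOs above, one option (a sketch of my own, not part of the original post) is to shell out to docker-compose from the fixture using System.Diagnostics.Process, assuming docker-compose is on the PATH and that a hypothetical composeDirectory variable points to the folder containing docker-compose.yml:
using System.Diagnostics;

public static class DockerCompose
{
    // Runs "docker-compose <arguments>" in the given directory and waits for it to finish.
    // Assumption: docker-compose is installed and available on the PATH.
    public static void Run(string composeDirectory, string arguments)
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = "docker-compose",
            Arguments = arguments,
            WorkingDirectory = composeDirectory,
            UseShellExecute = false
        };

        using (var process = Process.Start(startInfo))
        {
            process.WaitForExit();
        }
    }
}
The fixture constructor could then call DockerCompose.Run(composeDirectory, "build") and DockerCompose.Run(composeDirectory, "up -d"), and Dispose could call DockerCompose.Run(composeDirectory, "down").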
public class HomePageTest : IClassFixture<TestFixture>
{
    private IWebDriver WebDriver { get; }

    public HomePageTest(TestFixture fixture)
    {
        WebDriver = fixture.WebDriver;
    }

    [Fact(DisplayName = "Application loaded successfully")]
    public void WhenNavigatingOnTheHomeUrlThenTheHomePageIsDisplayedCorrectly()
    {
        var title = WebDriver.FindElement(By.ClassName("navbar-brand"));
        Assert.Equal("SetupSeleniumeDocker.UI", title.Text);
    }
}
See it in Action
Move to the folder where the docker-compose file is located and run the following commands:
docker-compose build
docker-compose up
- And finally, run the tests.
Here, I build and run the Docker containers manually because, in a build environment like TeamCity or VSTS, we have build steps like Docker Build and Docker Compose. So, if I use these build steps in the build environment, I don't need to automate docker-compose build and docker-compose up locally.
In the next tutorial, I will show how to automate docker-compose build and docker-compose up using PowerShell. In this scenario, I will not need a container registry.
Source Code
The source code will be available soon at https://github.com/logcorner?tab=repositories.