# Endpoints

| Method | URL Pattern     | Handler             | Action                                |
|--------|-----------------|---------------------|---------------------------------------|
| GET    | /v1/healthcheck | healthCheckHandler  | Show application information          |
| GET    | /v1/movies      | listMoviesHandler   | Show the details of all movies        |
| POST   | /v1/movies      | createMoviesHandler | Create a new movie                    |
| GET    | /v1/movies/:id  | showMovieHandler    | Show the details of a specific movie  |
| PUT    | /v1/movies/:id  | editMovieHandler    | Edit the details of a specific movie  |
| DELETE | /v1/movies/:id  | deleteMovieHandler  | Delete a specific movie               |

# Installation

## Launch API

`go run ./cmd/api`

You can also verify that the command-line flags are working correctly by specifying alternative **port** and **env** values when starting the application. When you do this, you should see the contents of the log message change accordingly. For example:

`go run ./cmd/api -port=3030 -env=production`

**time=2025-10-10T11:08:00.000+02:00 level=INFO msg="starting server" addr=:3030 env=production**

## Test endpoints

`curl -i localhost:4000/v1/healthcheck`

The *-i* flag in the command above instructs curl to display the HTTP response headers as well as the response body.

### Result

```
HTTP/1.1 200 OK
Date: Mon, 05 Apr 2021 17:46:14 GMT
Content-Length: 58
Content-Type: text/plain; charset=utf-8

status: available
environment: development
version: 1.0.0
```

## API Versioning

There are two common approaches to versioning an API:

1. Prefixing all URLs with your API version, like **/v1/healthcheck** or **/v2/healthcheck**.
2. Using custom **Accept** and **Content-Type** headers on requests and responses to convey the API version, like **Accept: application/vnd.greenlight-v1**.

From an HTTP semantics point of view, using headers to convey the API version is the 'purer' approach. But from a user-experience point of view, using a URL prefix is arguably better. It makes it possible for developers to see at a glance which version of the API is being used, and it also means that the API can still be explored using a regular web browser (which is harder if custom headers are required).

## SQL Migrations

The first thing we need to do is generate a pair of _migration files_ using the **migrate create** command:

```bash
migrate create -seq -ext=.sql -dir=./migrations create_movies_table
```

In this command:

- The **-seq** flag indicates that we want to use sequential numbering like **0001, 0002, ...** for the migration files (instead of a Unix timestamp, which is the default).
- The **-ext** flag indicates that we want to give the migration files the extension **.sql**.
- The **-dir** flag indicates that we want to store the migration files in the **./migrations** directory (which will be created automatically if it doesn't already exist).
- The name **create_movies_table** is a descriptive label that we give the migration files to signify their contents; a sketch of what they might contain is shown below.
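The two generated files start out empty: the `.up.sql` file should contain the SQL that applies the change, and the matching `.down.sql` file the SQL that reverses it. As a rough sketch only (the **movies** columns below are assumptions for illustration, not necessarily this project's exact schema), they might look something like this:

```sql
-- Sketch of the generated create_movies_table up migration.
-- The column list is an assumption for illustration purposes.
CREATE TABLE IF NOT EXISTS movies (
    id bigserial PRIMARY KEY,
    created_at timestamp(0) with time zone NOT NULL DEFAULT now(),
    title text NOT NULL,
    year integer NOT NULL,
    runtime integer NOT NULL,
    genres text[] NOT NULL,
    version integer NOT NULL DEFAULT 1
);

-- Sketch of the matching down migration, which reverses the change.
DROP TABLE IF EXISTS movies;
```

A second migration pair created the same way could then, for example, add the **CHECK** constraints mentioned later on, which would bring the **schema_migrations** version shown below up to **2**.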
### Executing the migrations

```bash
migrate -path=./migrations -database=$GREENLIGHT_DB_DSN up
```

---

Note: You may get the error **error: pq: permission denied for schema public...** when running this command. This is because Postgres may revoke the **CREATE** permission from all users except the database owner.

To get around this, set the database owner to the **greenlight** user:

```sql
ALTER DATABASE greenlight OWNER TO greenlight;
```

If that still doesn't work, try explicitly granting the **CREATE** privilege to the **greenlight** user:

```sql
GRANT CREATE ON DATABASE greenlight TO greenlight;
```

---

The **schema_migrations** table is automatically generated by the **migrate** tool and used to keep track of which migrations have been applied.

```
greenlight=> SELECT * FROM schema_migrations;
 version | dirty
---------+-------
       2 | f
```

The **version** column here indicates that our migration files up to (and including) number **2** in the sequence have been executed against the database. The value of the **dirty** column is **false**, which indicates that the migration files were cleanly executed _without any errors_ and the SQL statements they contain were applied in _full_.

You can run the **\d** meta command on the **movies** table to see the structure of the table and confirm that the **CHECK** constraints were created correctly.

### Migrating to a specific version

As an alternative to looking at the **schema_migrations** table, if you want to see which migration version your database is currently on you can run the **migrate** tool's **version** command like so:

```bash
$ migrate -path=./migrations -database=$EXAMPLE_DSN version
2
```

You can also migrate up or down to a specific version by using the **goto** command:

```bash
$ migrate -path=./migrations -database=$EXAMPLE_DSN goto 1
```

### Executing down migrations

You can use the **down** command to roll back by a specific number of migrations. For example, to roll back the _most recent migration_, you would run:

```bash
$ migrate -path=./migrations -database=$EXAMPLE_DSN down 1
```

Generally, prefer the **goto** command for roll-backs (as it's more explicit about the target version) and reserve the **down** command for rolling back _all migrations_, like so:

```bash
$ migrate -path=./migrations -database=$EXAMPLE_DSN down
Are you sure you want to apply all down migrations? [y/N]
y
Applying all down migrations
2/d create_bar_table (39.38729ms)
1/d create_foo_table (59.29829ms)
```

Another variant of this is the **drop** command, which will remove all tables from the database including the **schema_migrations** table, but the database itself will remain, [along with anything else that has been created](https://github.com/golang-migrate/migrate/issues/193) like sequences and enums. Because of this, using **drop** can leave your database in a messy and unknown state, and it's generally better to stick with the **down** command if you want to roll back everything.

### Fixing errors in SQL migrations

When you run a migration that contains an error, all SQL statements up to the erroneous one will be applied, and then the **migrate** tool will exit with a message describing the error, similar to this:

```bash
$ migrate -path=./migrations -database=$EXAMPLE_DSN up
1/u create_foo_table (39.38729ms)
2/u create_bar_table (78.29829ms)
error: migration failed: syntax error at end of input in line 0: CREATE TABLE (details: pq: syntax error at end of input)
```

If the migration file which failed contained multiple SQL statements, then it's possible that the file was **partially** applied before the error was encountered. In turn, this means that the database is in an unknown state as far as the **migrate** tool is concerned.

Accordingly, the **version** field in the **schema_migrations** table will contain the number of the failed migration and the **dirty** field will be set to **true**. At this point, if you run another migration (**even a "down" migration**) you will get an error message similar to this:

```bash
Dirty database version {X}. Fix and force version.
```

What you need to do is investigate the original error and figure out whether the migration file which failed was partially applied. If it was, you need to manually roll back the partially applied migration. Once that's done, you must also 'force' the version number in the **schema_migrations** table to the correct value. For example, to force the database version number to 1 you should use the **force** command like so:

```bash
$ migrate -path=./migrations -database=$EXAMPLE_DSN force 1
```

Once you force the version, the database is considered 'clean' and you should be able to run migrations again without any problem.
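To make that concrete, suppose the failed **create_bar_table** migration from the earlier example had managed to create the table before hitting an error in a later statement. A sketch of the manual clean-up (exactly what needs undoing depends on what was actually applied, and the **bar** table here is just the hypothetical one from the example output) might be:

```sql
-- Manually undo whatever the partially applied migration managed to do
-- before it failed. The bar table is the hypothetical one from the example.
DROP TABLE IF EXISTS bar;
```

After the clean-up, running `migrate -path=./migrations -database=$EXAMPLE_DSN force 1` as shown above clears the **dirty** flag, and the normal **up** and **down** commands will work again.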
### Remote migration files

The **migrate** tool also supports reading migration files from remote sources, including Amazon S3 and GitHub repositories. For example:

```bash
$ migrate -source="s3:///
```