How to run the project
1. Run the backend
$ cd backend
$ docker-compose up -d
$ poetry install
Before you move on, activate your virtual environment (venv). The exact command depends on how the venv was created; it can look similar to the following. Replace <venv-folder> with the folder where you placed the venv in the project.
$ source <venv-folder>/venv/bin/activate
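If the virtual environment does not exist yet, a minimal sketch for creating it first (assuming Python 3 is installed; the folder name is yours to choose):
$ python3 -m venv <venv-folder>/venv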
Following the venv activation, apply the database migrations, build the OpenSearch indices, and run the development server.
(venv) $ python manage.py migrate
(venv) $ python manage.py opensearch index rebuild
(venv) $ python manage.py opensearch document index
(venv) $ python manage.py runserver
After this, the backend should be reachable on localhost (by default http://127.0.0.1:8000).
Note: your server might run on a different host or port; adjust the addresses below accordingly.
Do not worry if the address takes you to a mostly blank page. Three options are displayed there:
- /admin/: the administration site of the backend.
- /api/v1/: the API exposing the users and lectures data.
- /api-auth/: login/logout.
To access one of the paths listed above, append the desired option to the localhost address, like this: http://127.0.0.1:8000/api/v1/.
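As a quick check that the backend is up, you can query it from another terminal (assuming curl is installed and the default address):
$ curl http://127.0.0.1:8000/api/v1/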
2. Add users
Add a super admin user with username admin and password 123456:
$ python manage.py loaddata users.json
The backend can be accessed with the credentials present in the users.json file. If you wish to modify or add admin credentials, open backend/cds/users.json and edit the existing entries, or add a new admin by providing another username and password.
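For reference, a Django fixture entry for a user typically has the following shape; the model path and fields below are the Django defaults and may differ in this project, so treat it as a sketch. Note that loaddata expects the hashed password, not the plaintext:
[
  {
    "model": "auth.user",
    "fields": {
      "username": "admin",
      "password": "pbkdf2_sha256$...hashed-password...",
      "is_staff": true,
      "is_superuser": true
    }
  }
]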
3. Add lectures
For local usage only:
$ python manage.py loaddata lectures.json
$ python manage.py opensearch index rebuild
$ python manage.py opensearch document index
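To confirm the lectures were loaded and indexed, you can browse the API; the endpoint path below is an assumption, so check http://127.0.0.1:8000/api/v1/ for the actual route list:
$ curl http://127.0.0.1:8000/api/v1/lectures/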
4. Run the UI
$ cd ui
$ yarn install
$ yarn start
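Depending on the UI tooling, yarn start typically launches a development server on its own port (commonly http://localhost:3000); check the terminal output for the exact address.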
5. Harvest lectures from CDS
- Harvest lectures from a specific date onward:
$ cd harvest
$ poetry install
$ scrapy crawl CDS -a "from_date=2021-09-01"
- Harvest all lectures up until now:
$ cd harvest
$ poetry install
$ scrapy crawl CDS -a "migrate_all=True"
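The -a flag is Scrapy's standard mechanism for passing arguments to a spider: each name=value pair arrives as a keyword argument in the spider's constructor, always as a string. A minimal sketch of how the CDS spider might receive them (the class name and defaults here are assumptions, not the project's actual code):

import scrapy

class CDSSpider(scrapy.Spider):
    name = "CDS"

    def __init__(self, from_date=None, migrate_all=False, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Values passed via -a arrive as strings, e.g. "2021-09-01" or "True".
        self.from_date = from_date
        self.migrate_all = migrate_all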