diff --git a/astro.config.mjs b/astro.config.mjs index 2ef1007..1b71141 100644 --- a/astro.config.mjs +++ b/astro.config.mjs @@ -8,6 +8,17 @@ export default defineConfig({ integrations: [ starlight({ title: 'Tech Docs', + defaultLocale: 'root', + locales: { + root: { + label: 'English', + lang: 'en' + }, + de: { label: 'Deutsch' }, + es: { label: 'Español' }, + fr: { label: 'Français' }, + ja: { label: '日本語' } + }, routeMiddleware: './src/blogRouteData.js', logo: { dark: './src/images/logo-full-dark.svg', diff --git a/src/content/docs/de/404.md b/src/content/docs/de/404.md new file mode 100644 index 0000000..976d1cc --- /dev/null +++ b/src/content/docs/de/404.md @@ -0,0 +1,9 @@ +--- +title: '404' +editUrl: false +lastUpdated: false +tableOfContents: false +hero: + title: '404' + tagline: Page not found. Check the URL or try searching for what you were looking for. +--- diff --git a/src/content/docs/de/about/authoring.md b/src/content/docs/de/about/authoring.md new file mode 100644 index 0000000..3b2b1c8 --- /dev/null +++ b/src/content/docs/de/about/authoring.md @@ -0,0 +1,308 @@ +--- +title: Authoring content +description: This page outlines best practices for updating and writing markdown files for the tech-docs repository. +--- + +The Tech Docs site contains two types of content--documentation pages and blog posts. Both content types are written in [Markdown](https://en.wikipedia.org/wiki/Markdown) and define page-specific details as [yaml](https://yaml.org/) key:value pairs. + +Tech Docs uses [GitHub-flavored Markdown](https://github.github.com/gfm/), a variant of Markdown syntax, and [SmartyPants](https://daringfireball.net/projects/smartypants/), a typographic punctuation plugin. These tools provide authors niceties like generating clickable links from text, creating lists and tables, formatting for quotations and em-dashes, and more. + +## Where pages go + +### Documentation pages + +Documentation pages live under `src/content/docs/`. 
Each page is a `.md` or `.mdx` file. The URL path is `/` plus the file path relative to that directory, without the extension—for example, `src/content/docs/architecture/public.md` is served at `/architecture/public`. Nested folders add segments to the path. + +### Blog + +Blog posts live under `src/content/blog/` as `.md` or `.mdx` files. The URL is `/blog/` plus the path to the file relative to that folder, without the extension—for example, `src/content/blog/v4-2-0-release-candidate.md` is served at `/blog/v4-2-0-release-candidate`. Nested folders add path segments to the URL. + +Valid frontmatter and body content are required for the site to be built and published. + +## Markdown + +Common use of Markdown throughout Tech Docs includes: + +- [headings](#headings) +- [links](#links) +- [emphasizing text](#emphasizing-text) +- [paragraphs](#paragraphs) +- [lists](#lists) +- [code examples](#code-examples) +- [diagrams](#diagrams) +- [asides](#asides) +- [images](#images) + +### Headings + +Start a new line with between 2 and 6 `#` symbols, followed by a single space, and then the heading text. + +```md +## Example second-level heading +``` + +The number of `#` symbols corresponds to the heading level in the document hierarchy. **The first heading level is reserved for the page title** (available in the page [YAML frontmatter](#yaml-frontmatter)). Therefore the first _authored_ heading on every page should be a second level heading (`##`). + +:::note[Second level heading requirement] +Authored headings should start at the second level (`##`) on every page, since the first level (`#`) is reserved for the page title which is machine-generated. +::: + +```md + + +## Second level heading + +Notice the page starts with a second level heading. + +Notice the blank lines above and below each heading. + +### Third level heading + +This is demo text under the Third level heading section. 
+ +#### Fourth level heading + +##### Fifth level heading + +###### Sixth and final level heading +``` + +### Links + +Create a link by wrapping the link text in brackets (`[ ]`) immediately followed by the external link URL, or internal link path, wrapped in parentheses (`( )`). + +```md +[text](URL or path) +``` + +Be sure not to include any space between the wrapped text and URL. + +```md + + +See the [TechDocs source code](https://github.com/archivesspace/tech-docs). +``` + +#### In documentation pages + +##### To other pages + +When linking to another Tech Docs documentation page, start with a forward slash (`/`), followed by the location of the page as found in the `src/content/docs/` directory, and omit the file extension (`.md`). + +```md +✅ [Public user interface](/architecture/public) + +❌ [Public user interface](architecture/public) +❌ [Public user interface](./architecture/public) +❌ [Public user interface](../architecture/public) +❌ [Public user interface](/architecture/public.md) +``` + +:::note[Internal link requirements] +Links to other Tech Docs documentation pages should: + +1. start with a forward slash (`/`) +2. reflect the location of the page as found in `src/content/docs/` +3. not include the file extension (`.md`) + +::: + +##### Within a page + +Starlight provides [automatic heading anchor links](https://starlight.astro.build/guides/authoring-content/#automatic-heading-anchor-links). To link to a section within a page, use the `#` symbol followed by the HTML `id` of the relevant section heading. + +```md + + +See the [Links](#links) section on this page. + +See the [Public configuration options](/architecture/public#configuration). +``` + +:::tip +A section heading's `id` is usually the same text string as the heading itself, but in all lowercase letters and with all single spaces converted to single hyphens. See the actual HTML `id` by right clicking on the heading to "inspect" it. 
+::: + +#### In blog posts + +When you write the body of a blog post, links to documentation pages use the same pattern as [in documentation pages](#to-other-pages): a leading `/` and the path under `src/content/docs/` without `.md`, for example `[Public user interface](/architecture/public)`. + +Links to another blog post use `/blog/` plus that post’s path under `src/content/blog/` without the extension—the same shape as its public URL (see [Blog](#blog) under [Where pages go](#where-pages-go)). For example, `src/content/blog/v4-2-0-release-candidate.md` is linked as `[v4.2.0 release candidate](/blog/v4-2-0-release-candidate)`. Nested folders add segments, for example `/blog/releases/v4-2-0` for `src/content/blog/releases/v4-2-0.md`. + +### Emphasizing text + +Wrap text to be emphasized with `_` for italics, `**` for bold, and `~~` for strikethrough. + +```md + + +_Italicized_ text + +**Bold** text + +**_Bold and italicized_** text + +~~Strikethrough~~ text +``` + +### Paragraphs + +Create paragraphs by leaving a blank line between lines of text. + +```md + + +This is one paragraph. + +This is another paragraph. +``` + +### Lists + +Precede each line in a list with a dash (`-`) for a bulleted list, or a number followed by a period (`1.`) for an ordered list. + +```md + + +- Resource +- Digital Object +- Accession + +1. Accession +2. Digital Object +3. Resource +``` + +### Code examples + +Wrap inline code with a single backtick (`` ` ``). + +Wrap code blocks with triple backticks (` ``` `), also known as a "code fence", placed just above and below the code. Append the name of the code's language or its file extension to the first set of backticks for syntax highlighting. + +````md + + +The `JSONModel` class is central to ArchivesSpace. + +```ruby +def h(str) + ERB::Util.html_escape(str) +end +``` +```` + +### Diagrams + +Tech Docs supports [Mermaid](https://mermaid.js.org/) diagrams in both documentation pages and blog posts. 
+ +Use a fenced code block with `mermaid` as the language: + +````md +```mermaid +flowchart TD + A[Staff user edits record] --> B[Indexer updates Solr] + B --> C[Updated record in PUI] +``` +```` + +Rendered example: + +```mermaid +flowchart TD + A[Staff user edits record] --> B[Indexer updates Solr] + B --> C[Updated record in PUI] +``` + +### Asides + +Asides are useful for highlighting secondary or marketing information. + +Wrap content in a pair of triple colons (`:::`) and append one of the aside types (e.g. `note`) to the first set of colons. The aside types are `note`, `tip`, `caution`, and `danger`, each of which has its own set of colors and icon. Customize the title by wrapping text in brackets (`[ ]`) placed after the note type. + +```md + + +:::tip +Become an ArchivesSpace member today! 🎉 +::: + +:::note[Some custom title] + +### Markdown is supported in asides + +![Pic alt text](../../../../images/example.jpg) + +Lorem ipsum dolor sit amet consectetur, adipisicing elit. +::: +``` + +:::note +Asides are a custom Markdown feature provided by the underlying [Starlight framework](https://starlight.astro.build/guides/authoring-content/#asides) that builds Tech Docs. +::: + +:::tip[Customize the aside title] +Customize the aside title by wrapping text in brackets (`[ ]`) after the note type, in this case `tip`. +::: + +### Images + +Show an image using an exclamation point (`!`), followed by the image's [alt text](https://en.wikipedia.org/wiki/Alt_attribute) (a brief description of the image) wrapped in brackets (`[ ]`), followed by the external URL, or internal path, wrapped in parentheses (`( )`). + +```md + + +![A dozen Krispy Kreme donuts in a box](https://example.com/donuts.jpg) + +![The ArchivesSpace logo](../../../../images/logo.svg) +``` + +:::note[Put images in `src/images`] +All internal images belong in the `src/images` directory. The relative path to images from this file is `../../../../images`. 
+::: + +## YAML frontmatter + +Each content file starts with [YAML](https://yaml.org/) frontmatter: metadata in a block wrapped in triple dashes (`---`). Use the templates below so every field we rely on is set explicitly. For more on how the site build system reads these values, see [Documentation content collection and schema](/about/development#documentation-content-collection-and-schema) and [Blog content collection and schema](/about/development#blog-content-collection-and-schema) on the Development page. + +### Documentation pages + +```md +--- +title: Using MySQL +description: Instructions for how to set up MySQL with ArchivesSpace. +--- +``` + +- **`title`** — Page title shown in the layout, browser tab, and metadata. +- **`description`** — Short summary used for SEO, search, and social previews. + +### Blog posts + +```md +--- +title: v4.2.0 Release Candidate +metaDescription: Early access to ArchivesSpace v4.2.0-RC1 is now available. +teaser: ArchivesSpace v4.2.0-RC1 has landed for early testing. +pubDate: 2026-03-20 +authors: + - Pat Doe +updatedDate: 2026-03-21 +--- +``` + +- **`title`** — Post headline on the post page and on the blog index. +- **`metaDescription`** — Short summary for page metadata (SEO) and for the index card when `teaser` is omitted. +- **`teaser`** — Text or HTML for the blog index card (links and light markup are common here). +- **`pubDate`** — Publication date; posts are ordered by this value, newest first. Use an ISO-style date (`YYYY-MM-DD`). +- **`authors`** — List of author names, shown comma-separated on the index and post page. +- **`updatedDate`** — Last-updated date in the same `YYYY-MM-DD` form when the post is revised after publication. + +## Image files + +All internal image files used in Tech Docs content should go in the `src/images` directory, located at the root of this project. 
+ +## Writing conventions + +- Plugins, not plug-ins +- Titles are sentence-case ("Application monitoring with New Relic") +- Documentation page titles prefer '-ing' verb forms ("Using MySQL", "Serving over HTTPS") diff --git a/src/content/docs/de/about/development.md b/src/content/docs/de/about/development.md new file mode 100644 index 0000000..40771f9 --- /dev/null +++ b/src/content/docs/de/about/development.md @@ -0,0 +1,318 @@ +--- +title: Development +description: This page describes how to set up the tech-docs repository, build the website, update dependencies, and run tests +# This is the last page in the sidebar, so point to Home next instead of +# the Help Center which comes after this page in the sidebar +next: + link: / + label: Home +--- + +Tech Docs is a [Node.js](https://nodejs.org) application, built with [Astro](https://astro.build/) and its [Starlight](https://starlight.astro.build/) documentation site framework. The source code is hosted on [GitHub](https://github.com/archivesspace/tech-docs). The site is statically built and (temporarily) hosted via [Cloudflare Pages](https://pages.cloudflare.com/). Content is written in [Markdown](/about/authoring#markdown). When the source code changes, a new set of static files is generated and published shortly after. + +## Dependencies + +Tech Docs depends on the following open source software (see `.nvmrc` and `package.json` for versions): + +1. [Node.js](https://nodejs.org) - JavaScript development and build environment; the version noted in `.nvmrc` reflects the default version of Node.js in the Cloudflare Pages build image +2. [Astro](https://astro.build/) - Static site generator conceptually based on "components" (React, Vue, Svelte, etc.) rather than "templates" (Jekyll, Handlebars, Pug, etc.) + 1. [Starlight](https://starlight.astro.build/) - Astro plugin and theme for documentation websites + 2. [Sharp](https://sharp.pixelplumbing.com/) - Image transformation library used by Astro +3. 
[Cypress](https://www.cypress.io/) - End-to-end testing framework +4. [Stylelint](https://stylelint.io/) - CSS linter used locally in text editors and remotely in [CI](#cicd) for testing + 1. [stylelint-config-recommended](https://github.com/stylelint/stylelint-config-recommended) - Base set of lint rules + 2. [postcss-html](https://github.com/ota-meshi/postcss-html) - PostCSS syntax for parsing HTML (and HTML-like including .astro files) + 3. [stylelint-config-html](https://github.com/ota-meshi/stylelint-config-html) - Allows Stylelint to parse .astro files +5. [Prettier](https://prettier.io/) - Source code formatter used locally in text editors and remotely in [CI](#cicd) for testing + 1. [prettier-plugin-astro](https://github.com/withastro/prettier-plugin-astro) - Allows Prettier to parse .astro files via the command line + +## Local development + +Run Tech Docs locally by cloning the Tech Docs repository, installing project dependencies, and spinning up a development server: + +```sh +# Requires git and Node.js + +# Clone Tech Docs and move to it +git clone https://github.com/archivesspace/tech-docs.git +cd tech-docs + +# Install dependencies +npm install + +# Run dev server +npm start +``` + +Now go to [localhost:4321](http://localhost:4321) to see Tech Docs running locally. Changes to the source code will be immediately reflected in the browser. + +### Building the site + +Building the site creates a set of static files, found in `dist` after build, that can be served locally or deployed to a server. Sometimes building the site surfaces errors not found in the development environment. + +```sh +# Build the site and output it to dist/ +npm run build +``` + +:::tip +Serve the built output by running `npm run preview` after a build. +::: + +### Available `npm` scripts + +The following scripts are made available via `package.json`. Invoke any script on the command line from the project root by prepending it with the `npm run` command, e.g. `npm run start`. 
+ +- `start` -- run Astro dev server +- `build` -- build Tech Docs for production +- `preview` -- serve the static build +- `astro` -- get Astro help +- `test:dev` -- run tests in development mode +- `test:prod` -- run tests in production mode +- `test` -- run tests in production mode (the default) +- `prettier:check` -- check formatting with Prettier +- `prettier:fix` -- fix possible format errors with Prettier +- `stylelint:check` -- lint CSS with Stylelint +- `stylelint:fix` -- fix possible CSS lint errors with Stylelint + +## Documentation pages + +Documentation pages are implemented with Starlight’s `docs` content collection. Source files are in `src/content/docs/`, and Starlight generates their routes as part of the normal Astro static build output (no separate docs build step). Sidebar hierarchy is configured in `src/siteNavigation.json`. For copy-paste templates and short author-facing field guidance, see [YAML frontmatter](/about/authoring#yaml-frontmatter). + +### Adding documentation pages + +To add a new documentation page: + +1. Create a Markdown file in the appropriate docs section directory under `src/content/docs/`. +2. Add that page to `src/siteNavigation.json` in the correct section and in the correct order so it appears in the sidebar navigation as desired. +3. If the new page becomes the first page in its section, update the corresponding homepage hero link in `src/components/HomePage.astro` so the section link points to the new first page. + +### Legacy `index.md` pages + +Some section directories still contain legacy `index.md` pages from the old Tech Docs site. Those pages can still be routed (for example `/architecture` and `/architecture/index`), but they are not included in the sidebar since they are not listed in `src/siteNavigation.json`. 
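+The file-path-to-URL convention these routes follow can be sketched in plain JavaScript (an illustrative helper, not code from this repository):

```javascript
// Illustrative only: derive a docs route from a file path under
// src/content/docs/, mirroring the convention described above —
// strip the collection root and the extension, then prepend '/'.
function docsRoute(filePath) {
  return (
    '/' +
    filePath
      .replace(/^src\/content\/docs\//, '') // drop the collection root
      .replace(/\.(md|mdx)$/, '')           // drop the file extension
  );
}

console.log(docsRoute('src/content/docs/architecture/public.md'));
// → /architecture/public
```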
+ +### Documentation content collection and schema + +In `src/content.config.ts`, the `docs` collection uses `docsLoader()` and [Starlight’s frontmatter schema](https://starlight.astro.build/reference/frontmatter/) via `docsSchema()`, extended with `issueUrl` and `issueText`. Frontmatter is validated at build time. Starlight requires a `title`; other keys are optional unless your page has a specific need. + +| Field | Required | Purpose | +| ----------------- | -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `title` | Yes | Page title in the layout, browser tab, and metadata. | +| `description` | No | Short summary for SEO, search, and social previews. Most pages set this; it is omitted on a few pages (for example [Staff interface](/architecture/frontend), [404](/404)). | +| `slug` | No | Overrides the URL segment instead of deriving it from the file path. | +| `editUrl` | No | Overrides the “Edit page” URL, or `false` to hide the link (for example on [404](/404)). | +| `head` | No | Extra tags for the document head (meta, link, custom title, etc.). | +| `tableOfContents` | No | Table of contents: `false` to hide, or `{ minHeadingLevel, maxHeadingLevel }` to tune range. | +| `template` | No | Starlight layout template (for example `splash`). | +| `hero` | No | Hero area for splash-style pages (`title`, `tagline`, optional `image`, `actions`, etc.). | +| `banner` | No | Optional banner above the page content. | +| `lastUpdated` | No | Override the displayed last-updated date, or `false` to hide it. | +| `prev` | No | Previous pagination link: `false`, a string label, or `{ link, label }`. | +| `next` | No | Next pagination link: `false`, a string label, or `{ link, label }`. 
For example, [Development](/about/development) sets this so “next” goes to Home instead of the external Help Center entry after it in the sidebar. | +| `pagefind` | No | Set `false` to omit the page from the Pagefind index. | +| `draft` | No | When `true`, exclude the page from production builds. | +| `sidebar` | No | Per-page sidebar label, order, badge, `hidden`, or link `attrs`. The main sidebar structure is configured in `src/siteNavigation.json`. | +| `issueUrl` | No | URL for the footer “report an issue” link, or `false` to hide it. Defaults in `src/content.config.ts` when omitted; authors may set explicitly (see [YAML frontmatter](/about/authoring#yaml-frontmatter)). | +| `issueText` | No | Label text for that footer link. Defaults in `src/content.config.ts` when omitted; authors may set explicitly (see [YAML frontmatter](/about/authoring#yaml-frontmatter)). | + +### Documentation routes + +- URLs are derived from file paths in `src/content/docs/` unless `slug` is set in frontmatter. +- Previous/next pagination is derived from sidebar order unless `prev`/`next` are overridden in frontmatter. + +### Documentation UI components + +| Area | Location | +| -------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------- | +| Sidebar hierarchy and grouping | `src/siteNavigation.json` | +| Default docs page title rendering | `src/components/CustomPageTitle.astro` (falls back to Starlight’s default `PageTitle` for non-blog routes) | +| Footer metadata/navigation (edit link, issue link, etc.) | `src/components/overrides/Footer.astro`, `src/components/overrides/EditLink.astro`, `src/components/IssueLink.astro` | + +### Documentation tests + +Documentation-page behavior is covered in Cypress, mainly `cypress/e2e/content-pages.cy.js` (sidebar, table of contents, footer metadata links, and pagination). 
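+The build-time frontmatter validation described above can be pictured with a much-simplified stand-in (the real schema is Starlight's `docsSchema()` plus the `issueUrl`/`issueText` extension in `src/content.config.ts`; this sketch only illustrates the required/optional split):

```javascript
// Simplified stand-in for the docs frontmatter check: `title` is
// required; `draft`, when present, must be a boolean. Not the real
// Starlight schema — just the shape of the validation it performs.
function checkDocsFrontmatter(fm) {
  const errors = [];
  if (typeof fm.title !== 'string' || fm.title.length === 0) {
    errors.push('title: required non-empty string');
  }
  if ('draft' in fm && typeof fm.draft !== 'boolean') {
    errors.push('draft: must be a boolean');
  }
  return errors;
}

console.log(checkDocsFrontmatter({ title: 'Using MySQL' })); // → []
```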
+ +## Blog + +The [blog](/blog) is implemented as an Astro content collection alongside the docs collection. Post source files are in `src/content/blog/`; routes live under `src/pages/blog/`. There is no separate blog build step—blog pages are part of the normal Astro static output, and site search ([Search](#search)) indexes them like other HTML. For where to put files and example frontmatter, see [Authoring content](/about/authoring#where-pages-go) and [YAML frontmatter](/about/authoring#yaml-frontmatter). + +### Adding blog posts + +To add a new blog post, create a new Markdown file in `src/content/blog/` with the required frontmatter fields (`title`, `metaDescription`, `pubDate`, and `authors`). + +Optional fields (`teaser` and `updatedDate`) can also be added as needed. No `src/siteNavigation.json` changes are required for blog posts; valid files in the collection are included automatically when the site builds. + +### Blog content collection and schema + +The `blog` collection is registered in `src/content.config.ts` with a Zod schema. Frontmatter is validated at build time. Adding or renaming frontmatter fields requires updating that schema and every consumer of `entry.data` (blog pages, middleware, and tests). + +| Field | Required | Purpose | +| ----------------- | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `title` | Yes | Post headline on the post page and index card. May include HTML for display; the document `` and prev/next pagination labels **strip HTML** from `title`. | +| `metaDescription` | Yes | Short summary for page meta description (SEO). Used as the index teaser text when `teaser` is omitted. | +| `teaser` | No | HTML or plain text for the blog index card (`set:html`). Prefer this for links or light HTML on the index; plain text in `title` is safest where tab titles and pagination matter. 
| +| `pubDate` | Yes | Publication date; posts are sorted by this field, newest first. Parsed from frontmatter and formatted for display in **UTC** on the index and post header. | +| `authors` | Yes | Array of author display names; shown comma-separated on the index and post page. | +| `updatedDate` | No | Optional revision date (`YYYY-MM-DD`). Stored in frontmatter but **not shown in the UI** today; useful for future display or consistency with the authoring template. | + +### Blog routes + +- `src/pages/blog/index.astro` — `/blog` index; loads posts, sorts by `pubDate` descending, passes data to the index UI. +- `src/pages/blog/[id].astro` — individual posts; `getStaticPaths` comes from the collection, so new valid posts appear on the next build. + +### Blog route middleware + +`src/blogRouteData.js` is Starlight route middleware for blog routes. It injects `pubDate`, `authors`, and `postTitle` for post pages and sets prev/next pagination (older post as “Previous,” newer as “Next”). Pagination labels use titles with HTML stripped. + +### Blog UI components + +| Area | Location | +| ------------------------------------ | ----------------------------------------------------------------------------- | +| Index list and cards | `src/components/BlogIndex.astro` | +| Index page title | `src/components/BlogIndexTitleHeader.astro` | +| Post title, date, authors, back link | `src/components/BlogPostTitleHeader.astro`, `src/components/BackToBlog.astro` | +| Default vs blog title | `src/components/CustomPageTitle.astro` | +| Header “Blog” link | `src/components/overrides/Header.astro` | +| Blog layout / sidebar behavior | `src/components/overrides/PageFrame.astro` | + +### Blog tests + +End-to-end coverage is in `cypress/e2e/blog.cy.js`. Update these tests when you change blog markup, URLs, or visible behavior. 
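+The index ordering and title handling described above can be sketched as follows (illustrative data; the real implementation lives in `src/pages/blog/index.astro` and `src/blogRouteData.js`):

```javascript
// Illustrative sketch: sort posts newest-first by pubDate, and build
// plain-text pagination labels by stripping HTML from titles.
const posts = [
  { title: 'v4.1.0 released', pubDate: new Date('2025-10-01') },
  { title: 'v4.2.0 <em>release candidate</em>', pubDate: new Date('2026-03-20') },
];

// Copy before sorting so the original order is untouched.
const newestFirst = [...posts].sort((a, b) => b.pubDate - a.pubDate);

// Naive tag-stripper for label text (the middleware strips HTML too).
const stripHtml = (s) => s.replace(/<[^>]*>/g, '');

console.log(newestFirst.map((p) => stripHtml(p.title)));
// → [ 'v4.2.0 release candidate', 'v4.1.0 released' ]
```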
+ +## Search + +Site search is a [Starlight feature](https://starlight.astro.build/guides/site-search/): + +> By default, Starlight sites include full-text search powered by [Pagefind](https://pagefind.app/), which is a fast and low-bandwidth search tool for static sites. +> +> No configuration is required to enable search. Build and deploy your site, then use the search bar in the site header to find content. + +:::note +Search only runs in production builds, not in the dev server. +::: + +## Theme customization + +Starlight can be customized in various ways, including: + +- [Settings](https://starlight.astro.build/guides/customization/) -- see `astro.config.mjs` +- [CSS](https://starlight.astro.build/guides/css-and-tailwind/) -- see `src/styles/custom.css` +- [Components](https://starlight.astro.build/guides/overriding-components/) -- see `src/components` + +## Static assets + +### Images + +Most image files should be stored in `src/images`. This allows for [processing by Astro](https://docs.astro.build/en/guides/images/) which includes performance optimizations. + +Images that should not be processed by Astro, like favicons, should be stored in `public`. + +:::note[Use `src/images` for all content images] +Put all images used in Tech Docs content in `src/images`. +::: + +### The `public` directory + +Files placed in `public` are not processed by Astro. They are copied directly to the output and made available from the root of the site, so `public/favicon.svg` becomes available at `docs.archivesspace.org/favicon.svg`, while `public/example/slides.pdf` becomes available at `docs.archivesspace.org/example/slides.pdf`. + +## Mermaid diagrams + +Tech Docs supports Mermaid diagrams in both docs and blog content (for authoring syntax, see [Authoring content](/about/authoring#diagrams)). Mermaid is a text-to-diagram tool: authors write diagram definitions in a code fence, and Mermaid turns that text into SVG diagrams in the browser. 
This differs from regular fenced code blocks that Starlight renders with [Expressive Code](https://expressive-code.com/) as static syntax-highlighted code snippets. + +### Implementation + +1. Runtime logic lives in `src/lib/mermaid.ts`. +2. The runtime is loaded by the Starlight page frame override in `src/components/overrides/PageFrame.astro`. +3. Mermaid fences are post-processed at runtime and rendered as SVG diagrams. + +### Theme behavior + +- Mermaid theme is derived from the site theme (`data-theme` on `<html>`): + - dark mode => Mermaid `dark` + - non-dark modes => Mermaid `default` +- A `MutationObserver` in `src/lib/mermaid.ts` watches for `data-theme` changes and re-renders existing Mermaid diagrams so colors update after theme toggles. +- Mermaid text color is explicitly set in `initializeMermaidRuntime()` for improved accessibility over its default styles: + - dark mode text: `#fff` + - light mode text: `#000` + +### Maintenance notes + +- If Starlight/Expressive Code markup changes in a future upgrade, update Mermaid selectors/parsing in `src/lib/mermaid.ts` (especially `pre[data-language="mermaid"]` and `.ec-line .code`). + +- If layout-level script loading changes, keep `src/components/overrides/PageFrame.astro` loading `src/lib/mermaid.ts` on pages where markdown content appears. +- Keep Cypress coverage updated in `cypress/e2e/mermaid.cy.js` when Mermaid rendering behavior or markup changes. + +## Update npm dependencies + +Run the following commands locally to update the npm dependencies, then push the changes upstream. + +```sh +# List outdated dependencies +npm outdated + +# Update dependencies +npm update +``` + +## Import aliases + +Astro supports [import aliases](https://docs.astro.build/en/guides/imports/#aliases) which provide shortcuts to writing long relative import paths. 
+ +```astro title="src/components/overrides/Example.astro" del="../../images" ins="@images" +--- +import relativeA from '../../images/A_logo.svg' // no alias +import aliasA from '@images/A_logo.svg' // alias +--- +``` + +## Sitemap + +Starlight has built-in [sitemap support](https://starlight.astro.build/guides/customization/#enable-sitemap) which is enabled via the top-level `site` key in `astro.config.mjs`. This key generates `/sitemap-index.xml` and `/sitemap-0.xml` when Tech Docs is [built](#building-the-site), and adds the sitemap link to the `<head>` of every page. `public/robots.txt` also points to the sitemap. + +## Testing + +### End-to-end + +Tech Docs uses [Cypress](https://www.cypress.io/) for end-to-end testing of customizations made to the underlying Starlight framework and other project needs. End-to-end tests are located in `cypress/e2e`. + +Run the Cypress tests locally by first building and serving the site: + +```sh +# Build the site +npm run build + +# Serve the build output +npm run preview +``` + +Then **in a different terminal** initiate the tests: + +```sh +# Run the tests +npm test +``` + +### Code style + +Nearly all files in the Tech Docs code base get formatted by [Prettier](https://prettier.io/) to ensure consistent readability and syntax. Run Prettier locally to find format errors and automatically fix them when possible: + +```sh +# Check formatting of .md, .css, .astro, .js, .yml, etc. files +npm run prettier:check + +# Fix any errors that can be overwritten automatically +npm run prettier:fix +``` + +All CSS in .css and .astro files is linted by [Stylelint](https://stylelint.io/) to help avoid errors and enforce conventions. 
Run Stylelint locally to find lint errors and automatically fix them when possible: + +```sh +# Check all CSS +npm run stylelint:check + +# Fix any errors that can be overwritten automatically +npm run stylelint:fix +``` + +### CI/CD + +Before new changes are accepted into the code base, the [end-to-end](#end-to-end) and [code style](#code-style) tests need to pass. Tech Docs uses [GitHub Actions](https://docs.github.com/en/actions) for its continuous integration and continuous delivery (CI/CD) platform, which automates the testing and deployment processes. The tests are defined in YAML files found in `.github/workflows/` and are run automatically when new changes are proposed. diff --git a/src/content/docs/de/administration/backup.md b/src/content/docs/de/administration/backup.md new file mode 100644 index 0000000..688cf61 --- /dev/null +++ b/src/content/docs/de/administration/backup.md @@ -0,0 +1,160 @@ +--- +title: Backup and recovery +description: Steps, commands, and advice for setting up your ArchivesSpace MySQL database and Solr index. Backups will ensure recovery in case of error or failure. +--- + +## Using the docker configuration package + +### Database backups + +The [Docker configuration package](/administration/docker) includes a mechanism that performs periodic backups of your MySQL database, +using [databacker/mysql-backup](https://github.com/databacker/mysql-backup). It is by default configured to perform +a dump every two hours. See [configuration](https://github.com/databacker/mysql-backup/blob/master/docs/configuration.md) for more options. + +The automatically created backups are located in the [`backups` directory](/administration/docker/) of the docker configuration package. 
+
+#### When using Docker
+
+You can explicitly create a dump of your dockerized database while the docker containers are running by executing the following command in your host system shell:
+
+```shell
+docker exec mysql mysqldump -u root -p123456 archivesspace | gzip > /tmp/db.$(date +%F.%H%M%S).sql.gz
+```
+
+#### When using Docker Desktop
+
+You can explicitly create a dump of your dockerized database while the docker containers are running by executing the following command on the "Exec" tab of your mysql container (the command runs inside the container, so no `docker exec` prefix is needed):
+
+```shell
+mysqldump -u root -p123456 archivesspace | gzip > /tmp/db.$(date +%F.%H%M%S).sql.gz
+```
+
+You can then export the created database dump from the `/tmp` directory of your mysql container using the "Files" tab.
+
+## Managing your own backups
+
+Performing regular backups of your MySQL database is critical. ArchivesSpace stores
+all of your records data in the database, so as long as you have backups of your
+database, you can always recover from errors and failures.
+
+If you are running MySQL, the `mysqldump` utility can dump the database
+schema and data to a file. It's a good idea to run this with the
+`--single-transaction` option to avoid locking your database tables
+while your backups run. It is also essential to use the `--routines`
+flag, which will include functions and stored procedures in the
+backup. The `mysqldump` utility is widely used, and there are many tutorials
+available. As an example, something like this in your `crontab` would back up your
+database twice daily:
+
+```shell
+# Dump archivesspace database at 6:30am and 6:30pm
+30 06,18 * * * mysqldump -u as -pas123 archivesspace | gzip > ~/backups/db.$(date +%F.%H%M%S).sql.gz
+```
+
+You should store backups in a safe location.
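+
+A backup only helps if it can actually be restored, so it is worth checking dumps periodically and pruning old ones. The following is a minimal sketch under assumptions (the directory layout and a 30-day retention window are illustrative, and a demo directory with a fabricated dump is created so the script runs anywhere):
+
+```shell
+#!/bin/sh
+# Sketch: verify the newest dump is a readable gzip archive and prune
+# dumps past a retention window. BACKUP_DIR below is a throwaway
+# stand-in for your real backup location (e.g. ~/backups).
+RETENTION_DAYS=30
+BACKUP_DIR=$(mktemp -d)
+printf 'CREATE TABLE example (id INT);\n' | gzip > "$BACKUP_DIR/db.$(date +%F.%H%M%S).sql.gz"
+
+# The timestamped names sort chronologically, so the last one is newest
+latest=$(ls -1 "$BACKUP_DIR"/db.*.sql.gz | tail -n 1)
+
+# gunzip -t reads the whole archive, failing on truncated/corrupt files
+if gunzip -t "$latest" 2>/dev/null; then
+  echo "OK: $latest"
+  status=ok
+else
+  echo "CORRUPT: $latest" >&2
+  status=corrupt
+fi
+
+# Delete dumps older than the retention window
+find "$BACKUP_DIR" -name 'db.*.sql.gz' -mtime +"$RETENTION_DAYS" -delete
+```
+
+Running a check like this from the same `crontab` keeps corruption from going unnoticed until a restore is needed.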
+
+If you are running with the demo database (NEVER run the demo database in production),
+you can create periodic database snapshots using the following configuration settings:
+
+```ruby
+# In this example, we create a snapshot at 4am each day and keep
+# 7 days' worth of backups
+#
+# Database snapshots are written to 'data/demo_db_backups' by
+# default.
+AppConfig[:demo_db_backup_schedule] = "0 4 * * *"
+AppConfig[:demo_db_backup_number_to_keep] = 7
+```
+
+Solr indexes can always be [recreated](/administration/indexes/) from the contents of the
+database. For large sites, where recreating the indexes would take too long, it is possible to [back up and restore Solr indexes](https://solr.apache.org/guide/solr/latest/deployment-guide/backup-restore.html).
+In that case, you also need to back up and restore the files used by the indexers to mark which parts of the data have already been indexed:
+
+```shell
+docker cp archivesspace:/archivesspace/data/indexer_state /tmp/indexer_state
+docker cp archivesspace:/archivesspace/data/indexer_pui_state /tmp/indexer_pui_state
+```
+
+## Creating backups of your database using the provided script
+
+ArchivesSpace provides simple scripts for Windows and Unix-like systems for backing up the database to a `.zip` file.
+
+### When using the embedded demo database
+
+Note: _NEVER use the demo database in production._ You can run:
+
+```shell
+scripts/backup.sh --output /path/to/backup-yyyymmdd.zip
+```
+
+and the script will generate a file containing a snapshot of the demo database.
+
+### When using MySQL
+
+If you are running against MySQL and have `mysqldump` installed, you
+can provide the `--mysqldump` option. This will read the
+database settings from your configuration file and add a dump of your
+MySQL database to the resulting `.zip` file.
+
+```shell
+scripts/backup.sh --mysqldump --output ~/backups/backup-yyyymmdd.zip
+```
+
+## Recovering from backup
+
+When recovering an ArchivesSpace installation from backup, you will
+need to restore your database (either the demo database or MySQL).
+
+After restoring your database, it is recommended to [recreate your Solr indexes](/administration/indexes/).
+
+### Recovering your database
+
+#### When managing your own MySQL
+
+If you are using MySQL, recovering your database just requires loading
+your `mysqldump` backup into an empty database. If you are using the
+`scripts/backup.sh` script (described above), this dump file is named
+`mysqldump.sql` in your backup `.zip` file.
+
+To load a MySQL dump file, follow the directions in _Set up your MySQL
+database_ to create an empty database with the appropriate
+permissions. Then, populate the database from your backup file using
+the MySQL client:
+
+```shell
+mysql -uas -p archivesspace < mysqldump.sql
+```
+
+where `as` is the user name, `archivesspace` is the database name, and
+`mysqldump.sql` is the mysqldump filename.
+
+You will be prompted for the password of the user.
+
+#### When using the demo database
+
+If you are using the demo database, your backup `.zip` file will
+contain a directory called `demo_db_backups`. Each subdirectory of
+`demo_db_backups` contains a backup of the demo database. To
+restore from a backup, copy its `archivesspace_demo_db` directory back
+to your ArchivesSpace data directory.
For example:
+
+```shell
+cp -a /unpacked/zip/demo_db_backups/demo_db_backup_1373323208_25926/archivesspace_demo_db \
+/path/to/archivesspace/data/
+```
+
+#### When running on Docker
+
+If you are using the Docker configuration package to run ArchivesSpace, you can restore a database dump onto your `archivesspace` MySQL database with the following command in your host system shell:
+
+```shell
+docker exec -i mysql mysql -uas -pas123 archivesspace < /tmp/db.2025-02-26.164907.sql
+```
+
+##### When using Docker Desktop
+
+On Docker Desktop, you can import your gzipped SQL dump into the `/tmp/` directory using the "Files" tab of your mysql container. Afterwards, on the "Exec" tab run the command:
+
+```shell
+gunzip -c /tmp/db.2026-02-17.155254.sql.gz | mysql -u as -pas123 archivesspace
+```
diff --git a/src/content/docs/de/administration/docker.md b/src/content/docs/de/administration/docker.md
new file mode 100644
index 0000000..8488c78
--- /dev/null
+++ b/src/content/docs/de/administration/docker.md
@@ -0,0 +1,226 @@
+---
+title: Running with Docker
+description: Instructions on setting up, running, and managing an ArchivesSpace installation using Docker.
+---
+
+## Docker images
+
+Starting with v4.0.0, ArchivesSpace officially supports [Docker](https://www.docker.com/) as the easiest way to get up and running. Docker eases installing, upgrading, starting, and stopping ArchivesSpace. It also makes it easy to set up ArchivesSpace as a system service that starts automatically on every reboot.
+
+If you prefer not to use Docker, another (more involved) way to get ArchivesSpace up and running is installing the latest [distribution `.zip` file](/getting_started/zip_distribution).
+
+ArchivesSpace Docker images are available on [Docker Hub](https://hub.docker.com/u/archivesspace).
+
+- main application images are built from [this Dockerfile](https://github.com/archivesspace/archivesspace/blob/master/Dockerfile)
+- solr images are built from [this Dockerfile](https://github.com/archivesspace/archivesspace/blob/master/solr/Dockerfile)
+
+## Installing
+
+### System requirements
+
+ArchivesSpace on Docker has been tested on Ubuntu Linux, Mac OS X, and Windows. At least 1024 MB of RAM is required. We recommend using at least 2 GB for optimal performance.
+
+### Software Dependencies
+
+When using Docker, the only software dependency is [Docker](https://www.docker.com/) itself. Follow the [instructions](https://docs.docker.com/get-started/get-docker/) to install the Docker engine.
+Optionally, installing [Docker Desktop](https://www.docker.com/products/docker-desktop/) provides a graphical way to manage, start, and stop your Docker containers, review container logs, and more.
+
+### Downloading the configuration package
+
+To run ArchivesSpace with Docker, first download the ArchivesSpace docker configuration package for the latest release from [GitHub](https://github.com/archivesspace/archivesspace/releases) (scroll down to the "Assets" section of the latest release page and look for the zip file named `archivesspace-docker-${VERSION}.zip`).
+
+The downloaded configuration package contains a simple yet configurable, production-ready docker-based setup intended to run on a single computer.
+
+### Contents of the configuration package
+
+Unzipping the downloaded file will create an `archivesspace` directory with the following contents:
+
+```
+.
+├── backups
+├── config
+│   └── config.rb
+├── locales
+├── plugins
+├── proxy-config
+│   └── default.conf
+├── sql
+├── docker-compose.yml
+├── stylesheets
+└── .env
+```
+
+- The `backups` directory is created the first time you start the application; it will contain the automatically performed backups of the database. See the [Automated Backups section](#automated-database-backups).
+- The `config/config.rb` file contains the [main configuration](/customization/configuration/) of ArchivesSpace.
+- The `locales` directory allows [customization of the UI text](/customization/locales/).
+- The `plugins` directory is there to accommodate additional ArchivesSpace [plugins](/customization/plugins/). By default, it contains the [`local`](/customization/plugins/#adding-your-own-branding) and [`lcnaf`](https://github.com/archivesspace-plugins/lcnaf) plugins.
+- `proxy-config/default.conf` contains the configuration of the bundled `nginx`; see also [proxy configuration](#proxy-configuration).
+- In the `sql` directory you can put your `.sql` database dump file to initialize the new database; see the [next section](#migrating-from-the-zip-distribution-to-docker).
+- The `stylesheets` directory contains the files that are used to create PDFs and other files.
+- `docker-compose.yml` contains all the information required by Docker to build and run ArchivesSpace.
+- `.env` contains configuration of the docker containers, including:
+  - Credentials used by ArchivesSpace to access its MySQL database. It is recommended to change the default root and user passwords to something safer.
+  - The database connection URI, which should also be [updated accordingly](/customization/configuration/#database-config) after the database user password is updated in the step above.
+
+## Migrating from the zip distribution to docker
+
+If you are currently running ArchivesSpace using the zip file distribution, you can start using Docker instead.
+
+### Create a backup of your ArchivesSpace instance database
+
+Use `mysqldump` to create a dump of your MySQL database:
+
+```shell
+mysqldump -uroot -p123456 -h 127.0.0.1 archivesspace > /tmp/db.$(date +%F.%H%M%S).sql
+```
+
+Follow the steps under the [Backup and recovery](/administration/backup/) section if you need more instructions on how to create backups of your MySQL database.
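+
+Before relying on the dump for migration, it can be worth checking that `mysqldump` finished writing it: a successful dump normally ends with a `-- Dump completed` comment line, while a dump interrupted mid-run will not. A minimal sketch of that check follows; the sample file is fabricated so the check is runnable without a real database, so substitute your actual `/tmp/db.*.sql` path.
+
+```shell
+#!/bin/sh
+# Sketch: check that a mysqldump file ends with the usual
+# "-- Dump completed" footer before using it for migration.
+# The sample dump below is fabricated for illustration only.
+dump=$(mktemp)
+printf 'CREATE TABLE example (id INT);\n-- Dump completed on 2025-02-26 16:49:07\n' > "$dump"
+
+if tail -n 1 "$dump" | grep -q '^-- Dump completed'; then
+  echo "dump looks complete: $dump"
+  complete=yes
+else
+  echo "dump may be truncated: $dump" >&2
+  complete=no
+fi
+```
+
+A truncated dump would otherwise only surface as a confusing error during the restore step.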
+
+### Initialize and migrate the database on Docker
+
+Copy the `.sql` database dump file created above into the `sql` directory of your unzipped Docker configuration package. Make sure the filename includes the `.sql` extension. The file should be in plain text format (not zipped).
+Docker will pick it up when it starts for the first time and restore the dump to your new database.
+
+If you created the dump on an earlier ArchivesSpace version, the system will apply any pending database migrations to upgrade your database to the ArchivesSpace version you are currently running on Docker.
+
+After the initial run, you will want to remove that `.sql` file from the `sql` directory of your unzipped Docker configuration package.
+
+The Docker configuration package already includes a configurable database backup mechanism for MySQL. Read more about it in the [backup and recovery section](/administration/backup/#using-the-docker-configuration-package).
+
+## Running
+
+### Resource limits
+
+We recommend allocating at least 2 GB per container for optimal performance. If the host instance is devoted to running ArchivesSpace, it is advisable to configure no memory limit for the Docker containers.
+
+When using Docker Desktop, a default memory limit is set to 50% of your host's memory. To increase the RAM and other resource limits when using Docker Desktop, see [the documentation](https://docs.docker.com/desktop/settings-and-maintenance/settings/#resources).
+
+When using Docker without Docker Desktop, no memory limit is set by default. See the [Docker documentation](https://docs.docker.com/engine/containers/resource_constraints/) if you need to limit the resources used by the ArchivesSpace containers.
+
+### Note on migrating from the zip distribution
+
+If migrating from the zip distribution to Docker, you most likely have local MySQL and Solr instances running. Starting ArchivesSpace with Docker will start Docker-based MySQL and Solr instances.
In order to avoid port binding conflicts, make sure that you stop your local MySQL and Solr instances before proceeding.
+
+### Start
+
+Open a terminal, change to the `archivesspace` directory that contains the `docker-compose.yml` file, and run:
+
+```shell
+docker compose up --detach
+```
+
+The first time you start ArchivesSpace with Docker, the container images will be downloaded and configuration steps such as database setup and Solr index initialization will be performed automatically.
+The whole process can take ten minutes or more, depending on the power of your machine and your internet connection speed. **Note:** if you are migrating from the zip distribution to Docker and have already copied a dump of your database into the `sql` directory, initializing the database and indexing it in Solr can take a long time depending on the size of your data.
+
+Starting with the `--detach` option allows closing the terminal without stopping ArchivesSpace. Viewing the logs of the running ArchivesSpace containers is possible in [Docker Desktop](https://www.docker.com/products/docker-desktop/) or in a terminal with:
+
+```shell
+docker compose logs --follow
+```
+
+Watch the logs for the welcome message:
+
+```
+2024-12-04 18:42:17 archivesspace | ************************************************************
+2024-12-04 18:42:17 archivesspace | Welcome to ArchivesSpace!
+
+2024-12-04 18:42:17 archivesspace | You can now point your browser to http://localhost:8080
+2024-12-04 18:42:17 archivesspace | ************************************************************
+```
+
+Using the default proxy configuration, the public user interface becomes available at http://localhost/ and the staff user interface at http://localhost/staff/ (default login: admin / admin).
+
+You can see the status of your running containers with:
+
+```
+docker ps
+```
+
+This will give a listing like this:
+
+```
+CONTAINER ID   IMAGE                               COMMAND                  CREATED        STATUS                   PORTS                                                 NAMES
+6cd7114c1796   nginx:1.21                          "/docker-entrypoint.…"   26 hours ago   Up 29 minutes            0.0.0.0:80->80/tcp, :::80->80/tcp                     proxy
+9ed453c46a9f   archivesspace/archivesspace:4.0.0   "/archivesspace/star…"   26 hours ago   Up 29 minutes (healthy)  8080-8081/tcp, 8089-8090/tcp, 8092/tcp                archivesspace
+ec71dd3030b7   databack/mysql-backup:latest        "/entrypoint dump"       26 hours ago   Up 29 minutes                                                                  db-backup
+8b74aa374ec8   archivesspace/solr:4.0.0            "docker-entrypoint.s…"   26 hours ago   Up 29 minutes            0.0.0.0:8983->8983/tcp, :::8983->8983/tcp             solr
+d2cf634744fe   mysql:8                             "docker-entrypoint.s…"   26 hours ago   Up 29 minutes            0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp  mysql
+```
+
+If you also have [Docker Desktop](https://www.docker.com/products/docker-desktop/) installed, you can use it to start, stop, and manage the ArchivesSpace containers after they have been created for the first time. Docker Desktop has a built-in terminal window that can be used to run Docker commands.
+
+### Stop
+
+The following commands need to be run from the `archivesspace` directory that contains the `docker-compose.yml` file.
You can stop the running containers (without deleting them) with the command:
+
+```shell
+docker compose stop
+```
+
+They can be started again with:
+
+```shell
+docker compose up --detach
+```
+
+### Start a shell within a container to run the provided scripts
+
+You can get a `bash` shell on the container running the ArchivesSpace application and run any of the scripts in the scripts directory with:
+
+```shell
+$ docker exec -it archivesspace bash
+archivesspace@9ed453c46a9f:/$ cd archivesspace/scripts/
+archivesspace@9ed453c46a9f:/archivesspace/scripts$ ls
+backup.bat  backup.sh  ead_export.bat  ead_export.sh  find-base.sh  initialize-plugin.bat  initialize-plugin.sh  password-reset.bat  password-reset.sh  rb  setup-database.bat  setup-database.sh
+archivesspace@9ed453c46a9f:/archivesspace/scripts$ ./setup-database.sh
+NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.base/java.io=ALL-UNNAMED
+Loading ArchivesSpace configuration file from path: /archivesspace/config/config.rb
+Loading ArchivesSpace configuration file from path: /archivesspace/config/config.rb
+Loading ArchivesSpace configuration file from path: /archivesspace/config/config.rb
+Detected MySQL connector 8+
+Running migrations against jdbc:mysql://db:3306/archivesspace?useUnicode=true&characterEncoding=UTF-8&user=[REDACTED]&password=[REDACTED]&useSSL=false&allowPublicKeyRetrieval=true&serverTimezone=UTC
+All done.
+```
+
+### Copy files from and to your data directory
+
+The ArchivesSpace `data` directory is not exposed in the Docker configuration package (unlike directories such as `locales` and `config`, which are exposed and easily accessible). This is due to issues we have had on Windows when exposing
+the `data` directory instead of using a Docker volume for it.
+
+If you need to copy files from/to the `data` directory, or any other directory of the ArchivesSpace installation, you can use [`docker cp`](https://docs.docker.com/reference/cli/docker/container/cp/) commands, such as:
+
+```shell
+docker cp archivesspace:/archivesspace/data/indexer_state /tmp/indexer_state
+docker cp ~/Desktop/test.png archivesspace:/archivesspace/data
+```
+
+## Automated database backups
+
+The Docker configuration package includes a mechanism that will perform periodic backups of your MySQL database; see [Backup and recovery](/administration/backup/#using-the-docker-configuration-package) for more information.
+
+## Proxy Configuration
+
+The Docker configuration package includes an `nginx`-based proxy that by default binds to port 80 of the host machine (see the `NGINX_PORT` variable in the `.env` file). See `proxy-config/default.conf` and the [nginx docker page](https://hub.docker.com/_/nginx) for more configuration options.
+
+## Upgrading
+
+If you are already using the Docker configuration package and upgrading to a newer ArchivesSpace version, [download and extract](#downloading-the-configuration-package) the latest version of the Docker configuration package.
+
+### With solr configuration / schema changes
+
+If the ArchivesSpace version you are upgrading to includes solr configuration or schema changes (see the [release notes](https://github.com/archivesspace/archivesspace/releases)), then you need to recreate your solr core and re-index.
Change to the `archivesspace` directory where you extracted the freshly downloaded Docker configuration package and run:
+
+```shell
+docker compose down solr app
+docker volume rm archivesspace_app-data archivesspace_solr-data
+docker compose pull
+docker compose up -d --build --force-recreate
+```
+
+### Without solr configuration / schema changes
+
+If no solr configuration or schema changes are included, change to the extracted `archivesspace` directory and run:
+
+```shell
+docker compose pull
+docker compose up -d --build --force-recreate
+```
diff --git a/src/content/docs/de/administration/getting_started.mdx b/src/content/docs/de/administration/getting_started.mdx
new file mode 100644
index 0000000..5572750
--- /dev/null
+++ b/src/content/docs/de/administration/getting_started.mdx
@@ -0,0 +1,143 @@
+---
+title: Getting started
+description: Detailed hardware and software requirements for running ArchivesSpace, including instructions on setting up and running an ArchivesSpace instance using the latest distribution .zip file.
+---
+
+import LatestReleaseBlurb from '@components/LatestReleaseBlurb.astro'
+
+## The latest release
+
+<LatestReleaseBlurb />
+
+## Two installation methods
+
+There are two different ways to install ArchivesSpace:
+
+- Using Docker
+- Using the `.zip` file distribution
+
+### Using Docker
+
+See the [Running with Docker](/administration/docker/) page for instructions on how to install ArchivesSpace using Docker.
+
+Starting with ArchivesSpace v4.0.0, the easiest and recommended way to get up and running is using Docker. This method eases installing, upgrading, starting, and stopping ArchivesSpace. It also makes it easy to set up ArchivesSpace as a system service that starts automatically on every reboot.
+
+### Using the `.zip` file distribution
+
+The older and more involved way is to install from the latest distribution `.zip` file as described below.
+
+#### System requirements
+
+##### Operating system
+
+ArchivesSpace is being tested on Ubuntu Linux, Mac OS X, and Windows.
+
+##### Memory
+
+At least 1024 MB of RAM allocated to the application is required. We recommend using at least 2 GB for optimal performance.
+
+#### Software requirements
+
+When using the zip distribution, a Java runtime environment and a Solr instance are required. See [using Docker](/administration/docker/) to avoid these dependencies.
+
+##### Java Runtime Environment
+
+We recommend using [OpenJDK](https://openjdk.org/projects/jdk/). The following table lists the supported Java versions for each version of ArchivesSpace:
+
+| ArchivesSpace version | OpenJDK version |
+| --------------------- | --------------- |
+| ≤ v3.5.1              | 8 or 11         |
+| v4.0.0 up to v4.1.1   | 11 or 17        |
+| ≥ v4.2.0              | 17 or 21        |
+
+Although the JRuby version used in ArchivesSpace v4.2.0 is still compatible with Java 11, we highly recommend using Java 17 or 21, as those are the Java versions ArchivesSpace v4.2.0 has been tested with. You can still use Java 11 with v4.2.0, but the ArchivesSpace Program Team can only provide support for environments using Java versions we have tested ArchivesSpace with (17 or 21).
+
+Note that in the next major release we expect to drop support for Java 17 and only support Java 21 and 25.
+
+##### Solr
+
+Up to ArchivesSpace v3.1.1, the zip file distribution includes an embedded Solr v4 instance, which is deprecated and no longer supported. Use the Docker images provided in the [ArchivesSpace Docker repository](https://hub.docker.com/orgs/archivesspace/repositories) and see also [using Docker](/administration/docker/) to avoid managing an external Solr instance.
+
+ArchivesSpace v3.2.0 or above requires an external Solr instance when running using the zip distribution.
The table below summarizes the supported Solr versions for each ArchivesSpace version:
+
+| ArchivesSpace version | External Solr version     |
+| --------------------- | ------------------------- |
+| ≤ v3.1.1              | no external solr required |
+| v3.2.0 up to v3.5.1   | 8 (8.11)                  |
+| v4.0.0 up to v4.1.1   | 9 (9.4.1)                 |
+| ≥ v4.2.0              | 9 (9.9.0)                 |
+
+Each ArchivesSpace version is tested for compatibility with the corresponding Solr version listed in the table above. Using the corresponding version of Solr is recommended, as that version is used during development and when running the ArchivesSpace automated tests.
+
+If you need to use ArchivesSpace with an older version of Solr, check the [release notes](https://github.com/archivesspace/archivesspace/releases) for any potential version compatibility issues.
+
+**Note: the ArchivesSpace Program Team can only provide support for Solr deployments
+using the "officially" supported version with the standard configuration provided by
+the application. Everything else will be treated as "best effort" community-led support.**
+
+See [Running with external Solr](/provisioning/solr) for more information on installing and upgrading Solr.
+
+##### Database
+
+While ArchivesSpace does include an embedded database, MySQL is required for production use.
+
+(While not officially supported by ArchivesSpace, some community members use MariaDB, so there is some community support for version 10.4.10 only.)
+
+**The embedded database is for testing purposes only. You should use MySQL or MariaDB for any data intended for production, including data in a test instance that you intend to move over to a production instance.**
+
+All ArchivesSpace versions can run on MySQL version 5.x or 8.x.
+
+#### Install and run
+
+Download the distribution `.zip` for your version from [ArchivesSpace releases on GitHub](https://github.com/archivesspace/archivesspace/releases).
+ +Confirm a supported Java version is active on your PATH: + +```sh +java -version +``` + +Compare the output with [Java Runtime Environment](#java-runtime-environment). If needed, install a supported OpenJDK or point your environment at one (avoid using an unsupported newer Java as the default). + +Extract the `.zip`; it creates a directory named `archivesspace`. Before starting ArchivesSpace, finish provisioning: + +- [MySQL](/provisioning/mysql) +- JDBC driver: [Download MySQL Connector](/provisioning/mysql/#download-mysql-connector) +- External [Solr](/provisioning/solr) when your version requires it (ArchivesSpace v3.2.0 and later on the zip distribution; see [Solr](#solr)) + +**Do not proceed until MySQL and Solr (when required) are running.** + +Start ArchivesSpace from that directory. On Linux and macOS: + +```shell +cd /path/to/archivesspace +./archivesspace.sh +``` + +On Windows: + +```shell +cd \path\to\archivesspace +archivesspace.bat +``` + +This runs ArchivesSpace in the foreground (it stops when you close the terminal). By default, logs are written to `logs/archivesspace.out`. + +**Note:** On Windows, errors such as `unable to resolve type 'size_t'` or `no such file to load -- bundler` often mean the path to the `archivesspace` folder contains spaces. Use a path without spaces. + +##### Verify and sign in + +The first startup can take about a minute. Then confirm the services in a browser: + +- http://localhost:8089/ — backend +- http://localhost:8080/ — staff interface +- http://localhost:8081/ — public interface +- http://localhost:8082/ — OAI-PMH server +- http://localhost:8090/ — Solr admin console + +In the staff interface, sign in with the default administrator account: + +- Username: `admin` +- Password: `admin` + +Create a repository via **System** → **Manage repositories** (top right). From **System** you can manage users and other administration tasks. 
**Change the default `admin` password before production use.** diff --git a/src/content/docs/de/administration/index.md b/src/content/docs/de/administration/index.md new file mode 100644 index 0000000..91ff590 --- /dev/null +++ b/src/content/docs/de/administration/index.md @@ -0,0 +1,13 @@ +--- +title: Administration basics +description: Index of the administration pages for the tech-docs website. +--- + +- [Getting started](./getting_started) +- [Running ArchivesSpace as a Unix daemon](./unix_daemon) +- [Running ArchivesSpace as a Windows service](./windows) +- [Backup and recovery](./backup) +- [Re-creating indexes](./indexes) +- [Resetting passwords](./passwords) +- [Upgrading](./upgrading) +- [Log rotation](./logrotate) diff --git a/src/content/docs/de/administration/indexes.md b/src/content/docs/de/administration/indexes.md new file mode 100644 index 0000000..aef049f --- /dev/null +++ b/src/content/docs/de/administration/indexes.md @@ -0,0 +1,86 @@ +--- +title: Recreating indexes +description: Steps for performing soft reindexes and full reindexes of Solr, including internal and external Solr. +--- + +There are two strategies for reindexing ArchivesSpace: + +- soft reindex +- full reindex + +## Soft reindex + +A soft reindex updates the existing documents in Solr without directly +touching the actual index documents on the filesystem. This can be done +while the system is running and is suitable for most use cases. + +There are two common ways to perform a soft reindex: + +1. Delete indexer state files + +ArchivesSpace keeps track of what has been indexed by using the files +under `data/indexer_state` and `data/indexer_pui_state` (for the PUI). + +If these files are missing, the indexer assumes that nothing has been +indexed and reindexes everything. To force ArchivesSpace to reindex all +records, just delete the files in `/path/to/archivesspace/data/indexer_state` +and `/path/to/archivesspace/data/indexer_pui_state`. 
+
+You can also do this selectively by record type. For example, to reindex
+accessions in repository 2, delete the file called `2_accession.dat`.
+
+2. Bump `system_mtime` values in the database
+
+If you update a record's `system_mtime`, it becomes eligible for reindexing.
+
+```sql
+-- reindex all resources
+UPDATE resource SET system_mtime = NOW();
+-- reindex resource 1
+UPDATE resource SET system_mtime = NOW() WHERE id = 1;
+```
+
+## Full reindex
+
+A full reindex is a complete rebuild of the index from the database. This
+may be required if you are having indexer issues, in the case of index
+corruption, or if called for by an upgrade owing to changes in ArchivesSpace's
+Solr configuration.
+
+To perform a full reindex:
+
+### ArchivesSpace <= 3.1.0 (embedded Solr)
+
+- Shut down ArchivesSpace
+- Delete these directories:
+  - `rm -rf /path/to/archivesspace/data/indexer_state/`
+  - `rm -rf /path/to/archivesspace/data/indexer_pui_state/`
+  - `rm -rf /path/to/archivesspace/data/solr_index/`
+- Restart ArchivesSpace
+
+### ArchivesSpace > 3.1.0 (external Solr)
+
+For external Solr there is a plugin that can perform all of the re-indexing steps: [aspace-reindexer](https://github.com/lyrasis/aspace-reindexer)
+
+Manual steps:
+
+- Shut down ArchivesSpace
+- Delete these directories:
+  - `rm -rf /path/to/archivesspace/data/indexer_state/`
+  - `rm -rf /path/to/archivesspace/data/indexer_pui_state/`
+- Perform a delete-all Solr query:
+  - `curl -X POST -H 'Content-Type: application/json' --data-binary '{"delete":{"query":"*:*" }}' http://${solrUrl}:${solrPort}/solr/archivesspace/update?commit=true`
+  - Windows PowerShell:
+    ```
+    Invoke-RestMethod -Uri "http://localhost:8983/solr/archivesspace/update?commit=true" `
+      -Method Post `
+      -ContentType "application/json" `
+      -Body '{"delete":{"query":"*:*"}}'
+    ```
+- Restart ArchivesSpace
+
+---
+
+You can watch the [Tips for indexing ArchivesSpace](https://www.youtube.com/watch?v=yFJ6yAaPa3A) YouTube video to see these steps
performed.
+
+---
diff --git a/src/content/docs/de/administration/logrotate.md b/src/content/docs/de/administration/logrotate.md
new file mode 100644
index 0000000..d96ce90
--- /dev/null
+++ b/src/content/docs/de/administration/logrotate.md
@@ -0,0 +1,28 @@
+---
+title: Log rotation
+description: Details an example of how to set up log rotation, which helps keep the ArchivesSpace log file from growing excessively.
+---
+
+To prevent your ArchivesSpace log file from growing excessively, you can set up log rotation. How to set up log rotation is specific to your institution, but here is an example logrotate configuration file with an explanation of what it does.
+
+Place a file like the following in `/etc/logrotate.d/`:
+
+```
+ /<install location>/archivesspace/logs/archivesspace.out {
+   daily
+   rotate 7
+   compress
+   notifempty
+   missingok
+   copytruncate
+ }
+```
+
+This example configuration file:
+
+- rotates the logs daily
+- keeps 7 days' worth of logs
+- compresses the logs so they take up less space
+- ignores empty logs
+- does not report errors if the log file is missing
+- creates a copy of the original log file for rotation before truncating the contents of the original file
diff --git a/src/content/docs/de/administration/passwords.md b/src/content/docs/de/administration/passwords.md
new file mode 100644
index 0000000..088336b
--- /dev/null
+++ b/src/content/docs/de/administration/passwords.md
@@ -0,0 +1,16 @@
+---
+title: Resetting passwords
+description: How to run a script that resets a user's password within ArchivesSpace.
+---
+
+Under the `scripts` directory you will find a script that lets you
+reset a user's password. You can invoke it as:
+
+```
+scripts/password-reset.sh theusername newpassword # or password-reset.bat under Windows
+```
+
+If you are running against MySQL, you can use this command to set a
+password while the system is running. If you are running against the
+demo database, you will need to shut down ArchivesSpace before running
+this script.
diff --git a/src/content/docs/de/administration/unix_daemon.md b/src/content/docs/de/administration/unix_daemon.md new file mode 100644 index 0000000..ba8d9d3 --- /dev/null +++ b/src/content/docs/de/administration/unix_daemon.md @@ -0,0 +1,60 @@ +--- +title: Running as a Unix daemon +description: Steps for running ArchivesSpace in the background as a daemon using the startup script, and additional info on configuring startup/init settings. +--- + +The `archivesspace.sh` startup script doubles as an init script. If +you run: + +``` +archivesspace.sh start +``` + +ArchivesSpace will run in the background as a daemon (logging to +`logs/archivesspace.out` by default, as before). You can shut it down with: + +``` +archivesspace.sh stop +``` + +You can even install it as a system-wide init script by creating a +symbolic link: + +``` +cd /etc/init.d +ln -s /path/to/your/archivesspace/archivesspace.sh archivesspace +``` + +Note: By default ArchivesSpace will overwrite the log file when restarted. You +can change that by modifying `archivesspace.sh` and changing the `$startup_cmd` +to include double greater than signs: + +``` +$startup_cmd &>> \"$ARCHIVESSPACE_LOGS\" & +``` + +Then use the appropriate tool for your distribution to set up the +run-level symbolic links (such as `chkconfig` for RedHat or +`update-rc.d` for Debian-based distributions). + +Note that you may want to edit archivesspace.sh to set the account +that the system runs under, JVM options, and so on. 
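The run-level setup mentioned above can be sketched as follows. This is a minimal, illustrative example that assumes you have already created the `/etc/init.d/archivesspace` symlink shown earlier; the exact commands depend on your distribution.

```shell
# Debian/Ubuntu: register the init script at the default run levels
sudo update-rc.d archivesspace defaults

# RedHat/CentOS: add the script and enable it at boot
sudo chkconfig --add archivesspace
sudo chkconfig archivesspace on
```

After this, the service should start automatically on reboot and can be controlled with `service archivesspace start|stop` on most init-based systems.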
+

For systems that use systemd, you may wish to use a systemd unit file for ArchivesSpace.

Something similar to this should work:

```
[Unit]
Description=ArchivesSpace Application
After=syslog.target network.target

[Service]
Type=forking
ExecStart=/path/to/your/archivesspace/archivesspace.sh start
ExecStop=/path/to/your/archivesspace/archivesspace.sh stop
PIDFile=/path/to/your/archivesspace/archivesspace.pid
User=archivesspace
Group=archivesspace

[Install]
WantedBy=multi-user.target
```
diff --git a/src/content/docs/de/administration/upgrading.md b/src/content/docs/de/administration/upgrading.md
new file mode 100644
index 0000000..9c5376d
--- /dev/null
+++ b/src/content/docs/de/administration/upgrading.md
@@ -0,0 +1,183 @@
---
title: Upgrading when using the zip distribution
description: Instructions on how to update ArchivesSpace.
---

If you have installed ArchivesSpace using the Docker Configuration Package, refer to [upgrading with Docker](/administration/docker/#upgrading). If you have installed ArchivesSpace using the zip distribution, read on! (In case you do not know what the difference is, see the [getting started page](/administration/getting_started/#two-ways-to-get-up-and-running)).

You can upgrade most versions of ArchivesSpace to a later version using these general instructions. Typically you do not need to progress through other versions of ArchivesSpace to get to a later one, unless there are special considerations for a specific version. Special considerations for these versions are noted here and in the release notes.
+

- **[Special considerations when upgrading to v1.1.0](/administration/upgrading_1_1_0)**
- **[Special considerations when upgrading to v1.1.1](/administration/upgrading_1_1_1)**
- **[Special considerations when upgrading from v1.4.2 to 1.5.x (these considerations also apply when upgrading from 1.4.2 to any version through 2.0.1)](/administration/upgrading_1_5_0)**
- **[Special considerations when upgrading to 2.1.0](/administration/upgrading_2_1_0)**
- **[Changing to external Solr when upgrading to 3.2.0 or later versions](https://docs.archivesspace.org/provisioning/solr/).**

## Create a backup of your ArchivesSpace instance

You should make sure you have a working backup of your ArchivesSpace
installation before attempting an upgrade. Follow the steps
under the [Backup and recovery section](/administration/backup) to do this.

## Unpack the new version

It's a good idea to unpack a fresh copy of the version of
ArchivesSpace you are upgrading to. This will ensure that you are
running the latest versions of all files. In the examples below,
replace the lowercase x with the version number you are updating to
(for example, 1.5.2 or 1.5.3).

For example, on Mac OS X or Linux:

```shell
$ mkdir archivesspace-1.5.x
$ cd archivesspace-1.5.x
$ curl -LJO https://github.com/archivesspace/archivesspace/releases/download/v1.5.x/archivesspace-v1.5.x.zip
$ unzip -x archivesspace-v1.5.x.zip
```

(The curl step is optional; it simply downloads the distribution from GitHub. You can also
download the zip file in your browser and copy it to the directory.)

On Windows, you can do the same by extracting ArchivesSpace into a new
folder you create in Windows Explorer.

## Shut down your ArchivesSpace instance

To ensure you get a consistent copy, you will need to shut down your
running ArchivesSpace instance now.
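If you use the bundled startup script described in the Unix daemon documentation, the shutdown step is a single command (the path below is illustrative):

```shell
# Stop the running instance before copying data and config files
/path/to/archivesspace-1.4.2/archivesspace/archivesspace.sh stop
```

If you run ArchivesSpace under a service manager instead, stop it through that manager so it is not restarted mid-copy.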
+

## Copy your configuration and data files

You will need to bring across the following files and directories from
your original ArchivesSpace installation:

- the `data` directory (see **Indexes note** below)
- the `config` directory (see **Configuration note** below)
- your `lib/mysql-connector*.jar` file (if using MySQL)
- any plugins and local modifications you have installed in your `plugins` directory

For example, on Mac OS X or Linux:

```shell
$ cd archivesspace-1.5.x/archivesspace
$ cp -a /path/to/archivesspace-1.4.2/archivesspace/data/* data/
$ cp -a /path/to/archivesspace-1.4.2/archivesspace/config/* config/
$ cp -a /path/to/archivesspace-1.4.2/archivesspace/lib/mysql-connector* lib/
$ cp -a /path/to/archivesspace-1.4.2/archivesspace/plugins/local plugins/
$ cp -a /path/to/archivesspace-1.4.2/archivesspace/plugins/wonderful_plugin plugins/
```

Or on Windows:

```
$ cd archivesspace-1.5.x\archivesspace
$ xcopy \path\to\archivesspace-1.4.2\archivesspace\data\* data /i /k /h /s /e /o /x /y
$ xcopy \path\to\archivesspace-1.4.2\archivesspace\config\* config /i /k /h /s /e /o /x /y
$ xcopy \path\to\archivesspace-1.4.2\archivesspace\lib\mysql-connector* lib /i /k /h /s /e /o /x /y
$ xcopy \path\to\archivesspace-1.4.2\archivesspace\plugins\local plugins\local /i /k /h /s /e /o /x /y
$ xcopy \path\to\archivesspace-1.4.2\archivesspace\plugins\wonderful_plugin plugins\wonderful_plugin /i /k /h /s /e /o /x /y
```

Note that you may want to preserve the log file (`logs/archivesspace.out`
by default) from your previous installation--just in case you need to
refer to it later.

### Configuration note

Sometimes a new release of ArchivesSpace will introduce new
configuration settings that weren't present in previous releases.
Before you replace the distribution `config/config.rb` with your
original version, it's a good idea to review the distribution version
to see if there are any new configuration settings of interest.
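One way to spot new settings is to compare your existing `config.rb` against the freshly unpacked distribution copy before overwriting it; a sketch using the standard `diff` tool (paths are illustrative):

```shell
# Run from the new archivesspace directory; lines prefixed with ">" exist
# only in the new distribution config and may be settings worth reviewing
diff /path/to/archivesspace-1.4.2/archivesspace/config/config.rb config/config.rb
```

This mirrors the locales comparison described later in this document.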
+

Upgrade notes will generally draw attention to any configuration
settings you need to set explicitly, but you never know when you'll
discover a new, exciting feature! Documentation might also refer to
uncommenting configuration options that won't be in your file if you
keep your older version.

### Indexes note

Sometimes a new release of ArchivesSpace will require a FULL reindex,
which means you do not want to copy over anything from your data directory
to your new release. The data directory contains the indexes created by Solr.
Check the release notes of the new version for any details about reindexing and
the [recreating indexes section](/administration/indexes/) for instructions on recreating indexes.

## Transfer your locales data

If you've made modifications to your locales file (`en.yml`) with customized
labels, titles, tooltips, etc., you'll need to transfer those to your new
locale file.

A good way to do this is to use a diff tool, like Notepad++, TextMate, or the
Linux `diff` command:

```shell
$ diff /path/to/archivesspace-1.4.2/locales/en.yml /path/to/archivesspace-1.5.x/archivesspace/locales/en.yml
$ diff /path/to/archivesspace-1.4.2/locales/enums/en.yml /path/to/archivesspace-v1.5.x/archivesspace/locales/enums/en.yml
```

This will show you the differences in your current locales files, as well as the
new additions in the new version's locales files. Simply copy the values you wish
to keep from your old ArchivesSpace locales files to your new ArchivesSpace locales
files.

## Run the database migrations

With everything copied, the final step is to run the database
migrations. This will apply any schema changes and data migrations
that need to happen as a part of the upgrade. To do this, use the
`setup-database` script for your platform.
For example, on Mac OS X
or Linux:

```shell
$ cd archivesspace-1.5.x/archivesspace
$ scripts/setup-database.sh
```

Or on Windows:

```shell
$ cd archivesspace-1.5.x\archivesspace
$ scripts\setup-database.bat
```

## Solr configuration updates

If the release you are upgrading to includes updates to the Solr schema or other configuration files (see the release notes)
and you're using external Solr (required beginning with version 3.2.0), you will need to update the Solr schema and configuration files
accordingly, by [copying the Solr configuration files](/provisioning/solr/#copy-the-config-files) from the release package to your external Solr configuration.
See also the [full instructions for using external Solr with ArchivesSpace](/provisioning/solr).

## If you've deployed to Tomcat

The steps to deploy to Tomcat are essentially the same as described in the
[archivesspace_tomcat](https://github.com/archivesspace-labs/archivesspace_tomcat) repository.

But, prior to running your setup-tomcat script, you'll need to be sure to clean out
any libraries from the previous ASpace version from your Tomcat classpath.

 1. Stop Tomcat
 2. Unpack your new version of ArchivesSpace
 3. Configure your MySQL database in `config.rb` (just like in the
    install instructions)
 4. Make sure all your other local configuration settings are in your
    `config.rb` file (check your Tomcat `conf/config.rb` file for your current
    settings)
 5. Make sure your MySQL connector jar is in the `lib` directory
 6. Run your setup-database script to migrate your database.
 7. Delete all ASpace-related jar libraries in your Tomcat's `lib` directory. These
    will include the "gems" folder, as well as "common.jar" and some
    [others](https://github.com/archivesspace/archivesspace/tree/master/common/lib).
    This will make sure you're running the correct version of the dependent
    libraries for your new ASpace version.
    Just be sure not to delete any of the Apache Tomcat libraries.
 8.
Run your setup-tomcat script (just like in the install instructions).
    This will copy all the files over to Tomcat.
 9. Start Tomcat

## That's it!

You can now start your new ArchivesSpace version as normal.
diff --git a/src/content/docs/de/administration/upgrading_1_1_0.md b/src/content/docs/de/administration/upgrading_1_1_0.md
new file mode 100644
index 0000000..868b49f
--- /dev/null
+++ b/src/content/docs/de/administration/upgrading_1_1_0.md
@@ -0,0 +1,62 @@
---
title: Upgrading to 1.1.0
description: Special considerations when upgrading from ArchivesSpace 1.0.9 or less to 1.1.0, including the option for an external Solr instance.
---

Additional upgrade considerations specific to this release. Refer to the [upgrade documentation](/administration/upgrading) for the standard instructions that apply in all cases.

## External Solr

---

In ArchivesSpace 1.0.9 the default ports configuration was:

```ruby
AppConfig[:backend_url] = "http://localhost:8089"
AppConfig[:frontend_url] = "http://localhost:8080"
AppConfig[:solr_url] = "http://localhost:8090"
AppConfig[:public_url] = "http://localhost:8081"
```

With the introduction of the [optional external Solr instance](/provisioning/solr) functionality, this has been updated to:

```ruby
AppConfig[:backend_url] = "http://localhost:8089"
AppConfig[:frontend_url] = "http://localhost:8080"
AppConfig[:solr_url] = "http://localhost:8090"
AppConfig[:indexer_url] = "http://localhost:8091" # NEW TO 1.1.0
AppConfig[:public_url] = "http://localhost:8081"
```

In most cases the default value for `indexer_url` will blend in seamlessly without you needing to take any action. However, if you modified the original values in your `config.rb` file, you may need to update it.
Examples:

**You use a different ports sequence**

```ruby
AppConfig[:indexer_url] = "http://localhost:9091"
```

**You run multiple ArchivesSpace instances on a single host**

Under this deployment scenario you would have changed port numbers for some (or all) instances in each `config.rb` file, so set the `indexer_url` for each instance as described above.

**You include hostnames**

```ruby
AppConfig[:indexer_url] = "http://yourhostname:8091"
```

## Clustering

---

In a clustered configuration you may need to edit `instance_[server hostname].rb` files:

```ruby
{
  ...
  :indexer_url => "http://[localhost|yourhostname]:8091",
}
```

---
diff --git a/src/content/docs/de/administration/upgrading_1_1_1.md b/src/content/docs/de/administration/upgrading_1_1_1.md
new file mode 100644
index 0000000..1df7953
--- /dev/null
+++ b/src/content/docs/de/administration/upgrading_1_1_1.md
@@ -0,0 +1,58 @@
---
title: Upgrading to 1.1.1
description: Instructions on how to resequence archival object and digital object components within the resource tree and details on a plugin to make PDFs available in the public interface.
---

Additional upgrade considerations specific to this release. Refer to the [upgrade documentation](/administration/upgrading) for the standard instructions that apply in all cases.

## Resequencing of Archival Object & Digital Object Component trees

---

There have been some scenarios in which archival objects and digital object components lose
some of the information used to order their hierarchy. This can result in issues when creating,
editing, or moving items in the tree, since there are database constraints to ensure uniqueness
of certain metadata elements.

In order to ensure data integrity, there is now a method to resequence the trees. This will
not reorder or edit the elements, but simply rebuild all the technical metadata used to establish
the ordering.
+

To run the resequencing process, edit the `config/config.rb` file to include this line:

```ruby
AppConfig[:resequence_on_startup] = true
```

and restart ArchivesSpace. This will trigger a rebuilding process after the application has
started. It's advised to let this rebuild process run its course prior to editing records.
The duration depends on the size of your database: from seconds (for databases with few
archival and digital objects) to hours (for databases with hundreds of thousands of records).
Check your log file to see how the process is going. When it has finished, you should see the application
return to normal operation, generally with only indexer updates being recorded in the log file.

After you've started ArchivesSpace, be sure to set `AppConfig[:resequence_on_startup]` back to
`false` in `config.rb`, since you will not need to run this process on every restart.

## Export PDFs in the Public Interface

---

A common request has been to have a PDF version of the EAD exported in the public application.
This has been a bit problematic, since EAD export has a rather large resource hit on the
database, which is only increased by the added process of PDF creation. We are currently
redesigning part of the ArchivesSpace backend to make PDF creation more user-friendly by
establishing a queue system for exports.

In the meantime, Mark Cooper at Lyrasis has made a [Public Metadata Formats plugin](https://github.com/archivesspace-deprecated/aspace-public-formats)
that exposes certain metadata formats and PDFs in the public UI. This plugin has been included
in this release, but you will need to configure which formats you would like
to have exposed. Please read the plugin documentation on how to configure this.

PLEASE NOTE:
Exporting large EAD resources with this plugin will most likely cause some problems.
Long requests
will time out, since the server does not want to waste resources on long-running processes.
In addition, a large number of requests for PDFs can cause an increased load on the server.
Please be aware of these plugin issues and limitations before enabling it.

---
diff --git a/src/content/docs/de/administration/upgrading_1_5_0.md b/src/content/docs/de/administration/upgrading_1_5_0.md
new file mode 100644
index 0000000..fb5662a
--- /dev/null
+++ b/src/content/docs/de/administration/upgrading_1_5_0.md
@@ -0,0 +1,147 @@
---
title: Upgrading to 1.5.0
description: Upgrade instructions for upgrading from ArchivesSpace 1.4.2 or lower to 1.5.0, including details on the newest container management feature.
---

Additional upgrade considerations specific to this release, which also apply to upgrading from 1.4.2 or lower to any version through 2.0.1. Refer to the [upgrade documentation](/administration/upgrading) for the standard instructions that apply in all cases.

## General overview

The upgrade process to the new data model in 1.5.0 requires considerable data transformation, and it is important for users to review this document to understand the implications and possible side effects.

A quick overview of the steps:

1. Review this document and understand how the upgrade will impact your data, paying particular attention to the [Preparation section](#preparation).
2. [Back up your database](/administration/backup).
3. No, really, [back up your database](/administration/backup).
4. It is suggested that [users start with a new Solr index](/administration/indexes). To do this, delete the `data/solr_index/index` directory and all files in the `data/indexer_state` directory. The embedded version of Solr has been upgraded, which should result in a much more compact index size.
5. Follow the standard [upgrading instructions](/administration/upgrading).
Important to note: The `setup-database.sh|bat` script will modify your database schema, but it will not move the data. If you are currently using the container management plugin, you will need to remove it from the list of plugins in your config file prior to starting ArchivesSpace.
6. Start ArchivesSpace. When 1.5.0 starts for the first time, a conversion process will kick off and move the data into the new table structure. **During this time, the application will be unavailable until it completes**. Duration depends on the size of your data and server resources, ranging from a few minutes for very small databases to several hours for very large ones.
7. When the conversion is done, the web application will start and the indexer will rebuild your index. Performance might be slower while the indexer runs, depending on your server environment and available resources.
8. Review the [output of the conversion process](#conversion) following the instructions below. How long it takes for the report to load will depend on the number of entries included in it.

## Preparing for and Converting to the New Container Management Functionality

With version 1.5.0, ArchivesSpace is adopting a new data model that will enable more capable and efficient management of the containers in which you store your archival materials. To take advantage of this improved functionality:

- Repositories already using ArchivesSpace as a production application will need to upgrade their ArchivesSpace applications to version 1.5.0. (This upgrade/conversion must be done to take advantage of any other new features and bug fixes in ArchivesSpace 1.5.0 or later versions.)
- Repositories not yet using ArchivesSpace in production but needing to migrate data from the Archivists’ Toolkit or Archon will need to migrate their data to version 1.4.2 of ArchivesSpace or earlier and then upgrade that version to version 1.5.0. (This can be done when your repository is ready to migrate to ArchivesSpace.)
+

- Repositories not yet using ArchivesSpace in production and not needing to migrate data from the Archivists’ Toolkit or Archon can start using ArchivesSpace 1.5.0 without needing to upgrade. (People in this situation do not need to read any further.)

Converting the container data model in version 1.4.2 and earlier versions of ArchivesSpace to the 1.5.0 version has some complexity and may not accommodate all the various ways in which container information has been recorded by diverse repositories. As a consequence, upgrading from a pre-1.5.0 version of ArchivesSpace requires planning for the upgrade, reviewing the results, and, possibly, remediating data either prior to or after the final conversion process. Because of all the variations in which container information can be recorded, it is impossible to know all the ways the data of repositories will be impacted. For this reason, **all repositories upgrading their ArchivesSpace to version 1.5.0 should do so with a backup of their production ArchivesSpace instance and in a test environment.** A conversion may only be undone by reverting back to the source database.

## Frequently Asked Questions

_How will my data be converted to the new model?_

When your installation is upgraded to 1.5.0, the conversion will happen as part of the upgrade process.

_Can I continue to use the current model for containers and not convert to the new model?_

Because it is such a substantial improvement (see the [new features list](#new-features-in-150) below), the new model is required for everyone using ArchivesSpace 1.5.0 and higher. The only way to continue using the current model is to never upgrade beyond 1.4.2.

_What if I’m already using the container management plugin made available to the community by Yale University?_

Conversion of data created using the Yale container management plugin, or a local adaptation of the plugin, will also happen as part of the process of upgrading to 1.5.0.
Some steps will be skipped when they are not needed. At the end of the process, the new container data model will be integrated into your ArchivesSpace and will not need to be loaded or maintained as a plugin.

Those currently running the container management plugin will need to remove it from the list of plugins in your config file prior to starting the conversion, or a validation name error will occur.

_I haven’t moved from Archivists’ Toolkit or Archon yet and am planning to use the associated migration tool. Can I migrate directly to 1.5.0?_

No, you must migrate to 1.4.2 or earlier versions and then upgrade your installation to 1.5.0 according to the instructions provided here.

_What changes are being made to the previous model for containers?_

The biggest change is the new concept of top containers. A top container is the highest level container in which a particular instance is stored. Top containers are in some ways analogous to the current Container 1, but broken out from the entire container record (child and grandchild container records). As such, top containers enable more efficient recording and updating of the highest level containers in your collection.

_How does ArchivesSpace determine what is a top container?_

During the conversion, ArchivesSpace will find all the Container 1s in your current ArchivesSpace database. It will then evaluate them as follows:

- If containers have barcodes, one top container is created for each unique Container 1 barcode.
- If containers do not have barcodes, one top container is created for each unique combination of container 1 indicator and container type 1 within a resource or accession.
- Once a top container is created, additional instance records for the same container within an accession or resource will be linked to that top container record.
+ +## Preparation + +_What can I do to prepare my ArchivesSpace data for a smoother conversion to top containers?_ + +- If your Container 1s have unique barcodes, you do not need to do anything except verify that your data is complete and accurate. You should run a preliminary conversion as described in the Conversion section and resolve any errors. +- If your Container 1s do not have barcodes, but have a nonduplicative container identifier sequence within each accession or resource (e.g. Box 1, Box 2, Box 3), or the identifiers are only reused within an accession or resource for different types of containers (for example, you have a Box 1 through 10 and an Oversize Box 1 through 3) you do not need to do anything except verify that your data is complete and accurate. You should run a preliminary conversion as described in the Conversion section and resolve any errors. +- If your Container 1s do not have barcodes and you have parallel numbering sequences, where the same indicators and types are used to refer to different containers within the same accession or resource within some or all accessions or resources (for example, you have a Box 1 in series 1 and a different Box 1 in series 5) you will need to find a way to uniquely identify these containers. One option is to run this [barcoder plugin](https://github.com/archivesspace-plugins/barcoder) for each resource to which this applies. The barcoder plugin creates barcodes that combine the ID of the highest level archival object ancestor with the container 1 type and indicator. (The barcoder plugin is designed to run against one resource at a time, instead of against all resources, because not all resources in a repository may match this condition.) Once you’ve differentiated your containers with parallel number sequences, you should run a preliminary conversion as described in the Conversion section and resolve any errors. + +You do not need to make any changes to Container 2 fields or Container 3 fields. 
Data in these fields will be converted to the new Child and Grandchild container fields that map directly to these fields. + +If you use the current Container Extent fields, these will no longer be available in 1.5.0. Any data in these fields will be migrated to a new Extent sub-record during the conversion. You can evaluate whether this data should remain in an extent record or if it belongs in a container profile or other fields and then move it accordingly after the conversion is complete. + +_I have EADs I still need to import into ArchivesSpace. How can I get them ready for this new model?_ + +If you have a box and folder associated with a component (or any other hierarchical relationship of containers), you will need to add identifiers to the container element so that the EAD importer knows which is the top container. If you previously used Archivists' Toolkit to create EAD, your containers probably already have container identifiers. If your container elements do not have identifiers already, Yale University has made available an [XSLT transformation file](https://github.com/YaleArchivesSpace/xslt-files/blob/master/EAD_add_IDs_to_containers.xsl) to add them. You will need to run it before importing the EAD file into ArchivesSpace. + +## Conversion + +When upgrading from 1.4.2 (and earlier versions) to 1.5.0, the container conversion will happen as part of the upgrade process. You will be able to follow its progress in the log. Instructions for upgrading from a previous version of ArchivesSpace are available at [upgrade documentation](/administration/upgrading). + +Because this is a major change in the data model for this portion of the application, running at least one test conversion is very strongly recommended. Follow these steps to run the upgrade/conversion process: + +- Create a backup of your ArchivesSpace instance to use for testing. 
**IT IS ESSENTIAL THAT YOU NOT RUN THIS ON A PRODUCTION INSTANCE AS THE CONVERSION CHANGES YOUR DATA, AND THE CHANGES CANNOT BE UNDONE EXCEPT BY REVERTING TO A BACKUP VERSION OF YOUR DATA PRIOR TO RUNNING THE CONVERSION.**
- Follow the upgrade instructions to unpack a fresh copy of the v1.5.0 release made available for testing, copy your configuration and data files, and transfer your locales.
- **It is recommended that you delete your Solr index files to start with a fresh index.** We are upgrading the version of Solr that ships with the application, and the upgrade will require a total reindex of your ArchivesSpace data. To do this, delete the `data/solr_index/index` directory and the files in `data/indexer_state`.
- Follow the upgrade instructions to run the database migrations. As part of this step, your container data will be converted to the new data model. You can follow along in the log. Windows users can open the `archivesspace.out` file in a tool like Notepad++. Mac users can run `tail -f logs/archivesspace.out` to get a live update from the log.
- When the test conversion has been completed, the log will indicate "Completed: existing containers have been migrated to the new container model."

![Image of Conversion Log](../../../../images/ConversionLog.png)

- Open ArchivesSpace via your browser and log in. Retrieve the container conversion error report from the Background Jobs area:
- Select Background Jobs from the Settings menu.

![Image of Background Jobs](../../../../images/BackgroundJobs.png)

- The first item listed under Archived Jobs after completing the upgrade should be container_conversion_job. Click View.

![Image of Background Jobs List](../../../../images/BackgroundJobsList.png)

- Under Files, click File to download a CSV file with the errors and a brief explanation.
+

![Image of Files](../../../../images/Files.png)

![Image of Error Report](../../../../images/ErrorReport.png)

- Go back to your source data and correct any errors that you can before doing another test conversion.
- When the error report shows no errors, or when you are satisfied with the remaining errors, your production instance is ready to be upgraded.
- When the final upgrade/conversion is complete, you can move ArchivesSpace version 1.5.0 into production.

_What are some common errors or anomalies that will be flagged in the conversion?_

- A container with a barcode has different indicators or types in different records.
- A container with a particular type and indicator sometimes has a barcode and sometimes doesn’t.
- A container is missing a type or indicator.
- Container levels are skipped (for example, there is a Container 1 and a Container 3, but no Container 2).
- A container has multiple locations.

The conversion process can resolve some of these errors for you by supplying or deleting values as it deems appropriate, but for the most control over the process you will most likely want to resolve such issues yourself in your ArchivesSpace database before converting to the new container model.

_Are there any known conversion issues?_

Due to a change in the ArchivesSpace EAD importer in 2015, some EADs with hierarchical containers not designated by a `@parent` attribute were turned into multiple instance records. This has since been corrected in the application, but we are working on a plugin (now available at [Instance Joiner Plugin](https://github.com/archivesspace-plugins/instance_joiner)) that will enable you to turn these back into single instances so that subcontainers are not mistakenly turned into top containers.
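One way to check on a test conversion without watching the log continuously, assuming the default log location described above, is to search the log for the completion message:

```shell
# Follow the conversion live (Mac/Linux); Ctrl-C to stop
tail -f logs/archivesspace.out

# Or, after the fact, confirm the completion message was logged
grep "Completed: existing containers have been migrated" logs/archivesspace.out
```

If the `grep` prints the message, the conversion finished and the error report should be waiting in the Background Jobs area.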
+ +## New features in 1.5.0 + +**Top containers replace Container 1s.** Unlike Container 1s in the current version of ArchivesSpace, top containers in the upcoming version can be defined once and linked many times to various archival objects, resources, and accessions. + +**The ability to create container profiles and associate them with top containers.** Optional container profiles allow you to track information about the containers themselves, including dimensions. + +**Extent calculator.** In conjunction with container profiles, the new extent calculator allows you to easily see extents for accessions, resources, or resource components. Optionally, you can use the calculator to generate extent records for an accession, resource, or resource component. + +**Bulk operations for containers.** The Manage Top Containers area provides more efficient ways to work with multiple containers, including the ability to add or edit barcodes, change locations, and delete top containers in bulk. + +**The ability to "share" boxes across collections in a meaningful way.** You can define top containers separately from individual accessions and resources and access them from multiple accession and resource records. For example, this might be helpful for recording information about an oversize box that contains items from many collections. + +**The ability to store data that will help you synchronize between ArchivesSpace and item records in your ILS.** If your institution creates item records in its ILS for containers, you can now record that information within ArchivesSpace as well. + +**The ability to store data about the restriction status of material associated with a container.** You can now see at a glance whether any portion of the contents of a container is restricted. + +**Machine-actionable restrictions.** You will now have the ability to associate begin and end dates with "conditions governing access" and "conditions governing use" Notes. 
You'll also be able to associate a local restriction type for non-time-bound restrictions. This makes it easier to manage and re-describe expiring restrictions.
+
+For more information on using the new features, consult the user manual, particularly the new section titled Managing Containers (available late April 2016).
diff --git a/src/content/docs/de/administration/upgrading_2_1_0.md b/src/content/docs/de/administration/upgrading_2_1_0.md
new file mode 100644
index 0000000..05b8e8e
--- /dev/null
+++ b/src/content/docs/de/administration/upgrading_2_1_0.md
@@ -0,0 +1,30 @@
+---
+title: Upgrading to 2.1.0
+description: Instructions on upgrading to ArchivesSpace 2.1.0 if coming from 1.4.2 or below, Archivists' Toolkit or Archon, or if using an external Solr server, in addition to notes on rights statement data migration.
+---
+
+Additional upgrade considerations specific to this release. Refer to the [upgrade documentation](/administration/upgrading) for the standard instructions that apply in all cases.
+
+:::note
+These considerations also apply when upgrading to any version past 2.1.0 from a version prior to 2.1.0.
+:::
+
+## For those upgrading from 1.4.2 and lower
+
+Following the merge of the Container Management Plugin in 1.5.0, ArchivesSpace still retained the old container model and had a number of dependencies on it. This imposed unnecessary complexity and some performance degradation on the system.
+
+In this release, all references to the old container model have been removed and the parts of the application that were dependent on it (for example, Imports and Exports) have been refactored to use the new container model.
+
+A consequence of this change is that if you are upgrading from ArchivesSpace version 1.4.2 or lower, you will need to first upgrade to any version between 1.5.0 and 2.0.1 to run the container conversion. You will then be able to upgrade to 2.1.0.
If you are already using any version of ArchivesSpace between 1.5.0 and 2.0.1, you will be able to upgrade directly to 2.1.0. + +## For those needing to migrate data from Archivists' Toolkit or Archon using the migration tools + +The migration tools are currently supported through version 1.4.2 only. If you want to migrate data to ArchivesSpace using one of these tools, you must migrate it to 1.4.2. From there you can follow the instructions for those upgrading from 1.4.2 and lower. + +## Data migrations in this release + +The rights statements data model has changed in 2.1.0. If you currently use rights statements, your data will be converted to the new model during the setup-database step of the upgrade process. We strongly urge you to backup your database and run at least one test upgrade before putting 2.1.0 into production. + +## For those using an external Solr server + +The index schema has changed with 2.1.0. If you are using an external Solr server, you will need to update the [schema.xml](https://github.com/archivesspace/archivesspace/blob/master/solr/schema.xml) with the newer version. If you are using the default Solr index that ships with ArchivesSpace, no action is needed. diff --git a/src/content/docs/de/administration/windows.md b/src/content/docs/de/administration/windows.md new file mode 100644 index 0000000..a34b237 --- /dev/null +++ b/src/content/docs/de/administration/windows.md @@ -0,0 +1,60 @@ +--- +title: Running as a Windows service +description: Instructions on how to set up ArchivesSpace as a Windows service. +--- + +Running ArchivesSpace as a Windows service requires some additional configuration. + +You can use Apache [procrun](http://commons.apache.org/proper/commons-daemon/procrun.html) to configure ArchivesSpace to run as a Windows service. We have provided a service.bat script that will attempt to configure procrun for you (under `launcher\service.bat`). 
+
+To run this script, first you need to [download procrun](http://www.apache.org/dist/commons/daemon/binaries/windows/).
+Extract the files and copy prunsrv.exe and prunmgr.exe to your ArchivesSpace directory.
+
+To find the path to Java, open "Start" > "Control Panel" > "Java" and select the "Java" tab. You'll see the path there. It will look something like `C:\Program Files (x86)\Java`.
+
+You also need to be sure that Java is in your system path and to create `JAVA_HOME` as a global environment variable.
+To add Java to your path, edit your %PATH% environment variable to include the directory of your java executable (it will be something like `C:\Program Files (x86)\Java`). To add `JAVA_HOME`, add a new system variable and set it to the directory where Java was installed (something like `C:\Program Files (x86)\Java`).
+
+Environment variables can be found by going to "Start" > "Control Panel" and searching for "environment". Click "Edit the system environment variables". In the section "System Variables", find the `PATH` environment variable and select it. Click Edit. If the `PATH` environment variable does not exist, click New. In the Edit System Variable (or New System Variable) window, specify the value of the `PATH` environment variable. Click OK. Close all remaining windows by clicking OK. Do the same for `JAVA_HOME`.
+
+Before setting up the ArchivesSpace service, you should also [configure ArchivesSpace to run against MySQL](/provisioning/mysql).
+Be sure that the MySQL connector jar file is in the lib directory, in order for
+the service setup script to add it to the application's classpath.
+
+Lastly, for the service to shut down cleanly, uncomment and change these lines in
+config/config.rb:
+
+```ruby
+AppConfig[:use_jetty_shutdown_handler] = true
+AppConfig[:jetty_shutdown_path] = "/xkcd"
+```
+
+This enables a shutdown hook that Jetty responds to when the shutdown action
+is taken.
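If you prefer the command line, the environment variables described above can also be set from an elevated Command Prompt. This is a hedged sketch, not part of the official instructions: adjust `C:\Program Files (x86)\Java` to your actual Java directory, and note that `setx` only affects shells opened afterwards and truncates values longer than 1024 characters.

```shell
:: Sketch: persist JAVA_HOME and extend PATH for future sessions.
:: Replace the path below with your actual Java installation directory.
setx JAVA_HOME "C:\Program Files (x86)\Java"
setx PATH "%PATH%;C:\Program Files (x86)\Java\bin"
```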
+ +You can now execute the batch script from your ArchivesSpace root directory from +the command line with `launcher\service.bat`. This will configure the service and +provide two executables: `ArchivesSpaceService.exe` (the service) and +`ArchivesSpaceServicew.exe` (a GUI monitor) + +There are several options to launch the service. The easiest is to open the GUI +monitor and click "Launch". + +Alternatively, you can start the GUI monitor and minimize it in your +system tray with: + +```shell +ArchivesSpaceServicew.exe //MS// +``` + +To execute the service from the command line, you can invoke: + +```shell +ArchivesSpaceService.exe //ES// +``` + +Log output will be placed in your ArchivesSpace log directory. + +Please see the [procrun +documentation](http://commons.apache.org/proper/commons-daemon/procrun.html) +for more information. diff --git a/src/content/docs/de/api/index.md b/src/content/docs/de/api/index.md new file mode 100644 index 0000000..3f79dc2 --- /dev/null +++ b/src/content/docs/de/api/index.md @@ -0,0 +1,486 @@ +--- +title: Working with the API +description: General information about working with the API, including authentication, get, and post requests with examples. +--- + +:::tip +This documentation provides general information on working with the API. For detailed documentation of specific endpoints, see the [API reference](http://archivesspace.github.io/archivesspace/api/), which is maintained separately. +::: + +## Authentication + +Most actions against the backend require you to be logged in as a user +with the appropriate permissions. By sending a request like: + + POST /users/admin/login?password=login + +your authentication request will be validated, and a session token +will be returned in the JSON response for your request. To remain +authenticated, provide this token with subsequent requests in the +`X-ArchivesSpace-Session` header. 
For example:
+
+    X-ArchivesSpace-Session: 8e921ac9bbe9a4a947eee8a7c5fa8b4c81c51729935860c1adfed60a5e4202cb
+
+Since not all backend/API endpoints require authentication, it is best to restrict access to port 8089 to only IP addresses you trust. Your firewall should be used to specify a range of IP addresses that are allowed to call your ArchivesSpace API endpoint. This is commonly called whitelisting or allowlisting.
+
+### Example requests using CURL
+
+Send request to authenticate:
+
+```shell
+curl -s -F password="admin" "http://localhost:8089/users/admin/login"
+```
+
+This will return a JSON response that includes something like the following:
+
+<!-- prettier-ignore -->
+```json
+{
+  "session":"9528190655b979f00817a5d38f9daf07d1686fed99a1d53dd2c9ff2d852a0c6e",
+  ....
+}
+```
+
+It’s a good idea to save the session key as an environment variable to use for later requests:
+
+```shell
+#Mac/Unix terminal
+export SESSION="9528190655b979f00817a5d38f9daf07d1686fed99a1d53dd2c9ff2d852a0c6e"
+
+#Windows Command Prompt
+set SESSION="9528190655b979f00817a5d38f9daf07d1686fed99a1d53dd2c9ff2d852a0c6e"
+
+#Windows PowerShell
+$env:SESSION="9528190655b979f00817a5d38f9daf07d1686fed99a1d53dd2c9ff2d852a0c6e"
+```
+
+Now you can make requests like this:
+
+```shell
+curl -H "X-ArchivesSpace-Session: $SESSION" "http://localhost:8089/repositories/2/resources/1"
+```
+
+## CRUD
+
+The ArchivesSpace API provides CRUD-style interactions for a number of
+different "top-level" record types. Working with records follows a
+fairly standard pattern:
+
+    # Get a paginated list of accessions from repository '123'
+    GET /repositories/123/accessions?page=1
+
+    # Create a new accession, returning the ID of the new record
+    POST /repositories/123/accessions
+    {...
a JSON document satisfying JSONModel(:accession) here ...}
+
+    # Get a single accession (returned as a JSONModel(:accession) instance) using the ID returned by the previous request
+    GET /repositories/123/accessions/456
+
+    # Update an existing accession
+    POST /repositories/123/accessions/456
+    {... a JSON document satisfying JSONModel(:accession) here ...}
+
+## Performing API requests
+
+### GET requests
+
+#### Resolving associated records
+
+The :resolve parameter tells ArchivesSpace to attach the full record to any refs (record references) in the response; it is passed in as an
+array of keys to "prefetch" in the returned JSON. The resolved object is included in the ref under a \_resolved key.
+
+For example, to find an archival object by a ref_id and return the found archival object, you can attach
+`resolve[]: "archival_objects"` within your request.
+
+##### Shell Example
+
+> ```shell
+> curl -s -F password="admin" "http://localhost:8089/users/admin/login"
+> # Replace "admin" with your password and "http://localhost:8089/users/admin/login" with your ASpace API URL
+> # followed by "/users/{your_username}/login"
+>
+> set SESSION="session_id"
+> # If using a unix-like shell, replace set with export
+>
+> curl -H "X-ArchivesSpace-Session: $SESSION" \
+> "http://localhost:8089/repositories/:repo_id:/find_by_id/archival_objects?ref_id[]=hello_im_a_ref_id;resolve[]=archival_objects"
+> # Replace "http://localhost:8089" with your ASpace API URL, :repo_id: with the repository ID,
+> # "hello_im_a_ref_id" with the ref ID you are searching for, and only add
+> # "resolve[]=archival_objects" if you want the JSON for the returned record - otherwise, it will return the
+> # record URI only
+> ```
+
+##### Python Example
+
+> ```python
+> from asnake.client import ASnakeClient # import the ArchivesSnake client
+>
+> client = ASnakeClient(baseurl="http://localhost:8089", username="admin", password="admin")
+> # Replace "http://localhost:8089" with your ArchivesSpace API URL and "admin" for your
username and password +> +> client.authorize() # authorizes the client +> +> find_ao_refid = client.get("repositories/:repo_id:/find_by_id/archival_objects", +> params={"ref_id[]": "hello_im_a_ref_id", +> "resolve[]": "archival_objects"}) +> # Replace :repo_id: with the repository ID, "hello_im_a_ref_id" with the ref ID you are searching for, and only add +> # "resolve[]": "archival_objects" if you want the JSON for the returned record - otherwise, it will return the +> # record URI only +> +> print(find_ao_refid.json()) +> # Output (dict): {'archival_objects': [{'ref': '/repositories/2/archival_objects/708425', '_resolved':...}]} +> ``` + +#### Requests for paginated results + +Endpoints that represent groups of objects, rather than single objects, tend to be paginated. Paginated endpoints are called out in the documentation as special, with some version of the following content appearing: +This endpoint is paginated. :page, :id_set, or :all_ids is required + + Integer page – The page set to be returned + Integer page_size – The size of the set to be returned ( Optional. 
default set in AppConfig )
+    Comma separated list id_set – A list of ids to request resolved objects ( Must be smaller than default page_size )
+    Boolean all_ids – Return a list of all object ids
+
+These endpoints support some or all of the following:
+
+    paged access to objects (via :page)
+    listing all matching ids (via :all_ids)
+    fetching specific known objects via their database ids (via :id_set)
+
+##### Shell Example
+
+> ```shell
+> curl -s -F password="admin" "http://localhost:8089/users/admin/login"
+> # Replace "admin" with your password and "http://localhost:8089/users/admin/login" with your ASpace API URL
+> # followed by "/users/{your_username}/login"
+>
+> set SESSION="session_id"
+> # If using a unix-like shell, replace set with export
+>
+> # For all archival objects, use all_ids
+> curl -H "X-ArchivesSpace-Session: $SESSION" \
+> "http://localhost:8089/repositories/2/archival_objects?all_ids=true"
+>
+> # For a set of archival objects, use id_set
+> curl -H "X-ArchivesSpace-Session: $SESSION" \
+> "http://localhost:8089/repositories/2/archival_objects?id_set=707458&id_set=707460&id_set=707461"
+>
+> # For a page of archival objects, use page and page_size
+> curl -H "X-ArchivesSpace-Session: $SESSION" \
+> "http://localhost:8089/repositories/2/archival_objects?page=1&page_size=10"
+> ```
+
+> Python example needed
+
+#### Working with long result sets
+
+When working with search results using page and page_size parameters, many results can be returned, and managing those
+results can be difficult. The Python example below demonstrates how to take a large paginated result set and iterate
+through it.
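The page-walking logic itself is independent of any particular HTTP client. The following is a hedged sketch: `iter_pages` is an illustrative helper, and `fetch_page` and the two fake pages stand in for real calls such as `client.get("repositories/2/archival_objects", params={"page": n}).json()`.

```python
def iter_pages(fetch_page):
    """Yield every result across a paginated ArchivesSpace-style endpoint.

    fetch_page(page) must return a dict shaped like the paginated
    responses shown above: {"this_page": n, "last_page": m, "results": [...]}.
    """
    page = 1
    while True:
        data = fetch_page(page)
        yield from data["results"]
        if data["this_page"] >= data["last_page"]:
            break
        page += 1

# Stand-in for a real endpoint: two fake pages of results.
pages = {
    1: {"this_page": 1, "last_page": 2,
        "results": [{"uri": "/repositories/2/archival_objects/1"}]},
    2: {"this_page": 2, "last_page": 2,
        "results": [{"uri": "/repositories/2/archival_objects/2"}]},
}
uris = [r["uri"] for r in iter_pages(pages.get)]
print(uris)
```

Because the helper is a generator, results stream one page at a time rather than being loaded all at once.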
+
+##### Python Example
+
+> ```python
+> from asnake.client import ASnakeClient # import the ArchivesSnake client
+>
+> client = ASnakeClient(baseurl="http://localhost:8089", username="admin", password="admin")
+> # Replace http://localhost:8089 with your ArchivesSpace API URL and admin for your username and password
+>
+> client.authorize() # authorizes the client
+>
+> # To get a page of archival objects with a set page size, use "page" and "page_size" parameters
+> get_repo_aos_pages = client.get("repositories/2/archival_objects", params={"page": 1, "page_size": 10})
+> # Replace 2 for your repository ID. Find this in the URI of your archival object on the bottom right of the
+> # Basic Information section in the staff interface
+>
+> print(get_repo_aos_pages.json())
+> # Output (dictionary): {'first_page': 1, 'last_page': 26949, 'this_page': 1, 'total': 269488,
+> # 'results': [{'lock_version': 1, 'position': 0,...]...}
+>
+> results = get_repo_aos_pages.json()["results"]
+> result_count = len(results)  # the number of results on this page
+> for result in results:
+>     # Each result is a full JSONModel dict, so fields can be read directly
+>     print(result["uri"], result.get("title"))
+> ```
+
+#### Search requests
+
+A number of routes in the ArchivesSpace API are designed to search for content across all or part of the records in the
+application. These routes make use of Solr, a component bundled with ArchivesSpace and used to provide full text search
+over records.
+
+The search routes present in the application as of this time are:
+
+- Search this archive
+- Search across repositories
+- Search this repository
+- Search across subjects
+- Search for top containers
+- Search across location profiles
+
+Search routes take quite a few different parameters, most of which correspond directly to Solr query parameters. The
+most important parameter to understand is q, which is the query sent to Solr. This query is made in Lucene query
+syntax.
The relevant docs are in the Solr Ref Guide's [The Standard Query Parser](https://solr.apache.org/guide/6_6/the-standard-query-parser.html#the-standard-query-parser) webpage. + +To limit a search to records of a particular type or set of types, you can use the 'type' parameter. This is only +relevant for search endpoints that aren't limited to specific types. Note that type is expected to be a list of types, +even if there is only one type you care about. + +##### Notes on search routes and results + +ArchivesSpace represents records as JSONModel Objects - this is what you get from and send to the system. + +SOLR takes these records, and stores "documents" BASED ON these JSONModel objects in a searchable index. + +Search routes query these documents, NOT the records themselves as stored in the database and represented by JSONModel. + +JSONModel objects and SOLR documents are similar in some ways: + +- Both SOLR documents and JSONModel Objects are expressed in JSON +- In general, documents will always contain some subset of the JSONModel object they represent + +But they also differ in quite a few important ways: + +- SOLR documents don't necessarily have all fields from a JSONModel object +- SOLR documents do not automatically contain nested JSONModel Objects +- SOLR documents can have fields defined that are arbitrary "search representations" of fields in associated records, + or combinations of fields in a record +- SOLR documents don't have a jsonmodel_type field - the jsonmodel_type of the record is stored as primary_type in SOLR + +How do I get the actual JSONModel from a search document? + +In ArchivesSpace, SOLR documents all have a field json, which contains the JSONModel Object the document represents as +a string. You can use a JSON library to parse this string from the field, for example the json library in Python. 
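As a concrete sketch, parsing that field with Python's standard json library looks like this. The document below is a hand-built stand-in, not real search output; real documents carry many more fields.

```python
import json

# Stand-in for a single document from a search response.
search_doc = {
    "primary_type": "archival_object",
    "json": '{"jsonmodel_type": "archival_object", "title": "Example object"}',
}

# The "json" field holds the JSONModel Object as a string, so parse it back
# into a dict before reading fields from it.
record = json.loads(search_doc["json"])
print(record["title"])  # Example object
```

Note that `jsonmodel_type` reappears inside the parsed record even though the outer document only has `primary_type`.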
+ +##### Shell Example + +> ```shell +> +> # auto-generated example +> curl -H "X-ArchivesSpace-Session: $SESSION" \ +> "http://localhost:8089/search/repositories?q=&aq=%7B%22jsonmodel_type%22%3D%3E%22advanced_query%22%2C+%22query%22%3D%3E%7B%22jsonmodel_type%22%3D%3E%22boolean_query%22%2C+%22op%22%3D%3E%22AND%22%2C+%22subqueries%22%3D%3E%5B%7B%22jsonmodel_type%22%3D%3E%22date_field_query%22%2C+%22negated%22%3D%3Efalse%2C+%22comparator%22%3D%3E%22empty%22%2C+%22field%22%3D%3E%22QSUC205%22%2C+%22value%22%3D%3E%222018-03-26%22%7D%5D%7D%7D&type%5B%5D=&sort=&facet%5B%5D=&facet_mincount=1&filter=%7B%22jsonmodel_type%22%3D%3E%22advanced_query%22%2C+%22query%22%3D%3E%7B%22jsonmodel_type%22%3D%3E%22boolean_query%22%2C+%22op%22%3D%3E%22AND%22%2C+%22subqueries%22%3D%3E%5B%7B%22jsonmodel_type%22%3D%3E%22date_field_query%22%2C+%22negated%22%3D%3Efalse%2C+%22comparator%22%3D%3E%22empty%22%2C+%22field%22%3D%3E%22QSUC205%22%2C+%22value%22%3D%3E%222018-03-26%22%7D%5D%7D%7D&filter_query%5B%5D=&exclude%5B%5D=&hl=BooleanParam&root_record=&dt=&fields%5B%5D=" +> +> # auto-generated example +> curl -H 'Content-Type: text/json' -H "X-ArchivesSpace-Session: $SESSION" \ +> "http://localhost:8089/search/repositories" \ +> -d '{ +> "aq": { +> "jsonmodel_type": "advanced_query", +> "query": { +> "jsonmodel_type": "boolean_query", +> "op": "AND", +> "subqueries": [ +> { +> "jsonmodel_type": "date_field_query", +> "negated": false, +> "comparator": "empty", +> "field": "QSUC205", +> "value": "2018-03-26" +> } +> ] +> } +> }, +> "facet_mincount": "1", +> "filter": { +> "jsonmodel_type": "advanced_query", +> "query": { +> "jsonmodel_type": "boolean_query", +> "op": "AND", +> "subqueries": [ +> { +> "jsonmodel_type": "date_field_query", +> "negated": false, +> "comparator": "empty", +> "field": "QSUC205", +> "value": "2018-03-26" +> } +> ] +> } +> }, +> "hl": "BooleanParam" +> }' +> ``` + +### POST requests + +#### Updating existing records + +For updating existing records, it's recommended to 
first do a GET request for the record you want to update. This
+ensures that you are working from the most current version of the record and reduces the chance of inadvertently
+removing data that was there previously but is not included in the subsequent update. After getting the original
+record data, you can update it as needed and then do a POST request with the updated data. Make sure that the updated
+data is JSON formatted and is passed either through the `-d` or `--data` parameter, or the `json` parameter if using
+ArchivesSnake.
+
+##### Shell Example
+
+> ```shell
+> curl -s -F password="admin" "http://localhost:8089/users/admin/login"
+> # Replace "admin" with your password and "http://localhost:8089" with your ASpace API URL followed by
+> # "/users/{your_username}/login"
+>
+> set SESSION="session_id"
+> # If using a unix-like shell, replace set with export
+>
+> curl -H 'Content-Type: text/json' -H "X-ArchivesSpace-Session: $SESSION" \
+> "http://localhost:8089/repositories/:repo_id:/groups/:group_id:" \
+> -d '{"group_code": "test-group_managers",
+> "lock_version": 4,
+> "description": "Test group managers of the Manuscripts repository",
+> "jsonmodel_type": "group",
+> "member_usernames": [
+> "manager", "advance"]}'
+> # Replace http://localhost:8089 with your ArchivesSpace API URL, :repo_id: with the repository ID number,
+> # :group_id: with the group ID number you want to update, and the data found after -d with the data you want
+> # to update in the group. Be sure to include "lock_version" and the most recent number for it.
You can find the
+> # most recent lock_version by submitting a get request, like so: curl -H "X-ArchivesSpace-Session: $SESSION" \
+> # "http://localhost:8089/repositories/:repo_id:/groups/:group_id:"
+>
+> # Output:
+> # {"status":"Updated","id":23,"lock_version":5,"stale":null,"uri":"/repositories/2/groups/23","warnings":[]}
+> ```
+
+##### Python Example
+
+> ```python
+> from asnake.client import ASnakeClient # import the ArchivesSnake client
+> client = ASnakeClient(baseurl="http://localhost:8089", username="admin", password="admin")
+> # replace http://localhost:8089 with your ArchivesSpace API URL and admin for your username and password
+>
+> client.authorize() # authorizes the client
+>
+> get_user_group = client.get("repositories/:repo_id:/groups/:group_id:").json()
+> # Retrieve the data from the group you are trying to update. Replace :repo_id: with the repository ID number and
+> # :group_id: with the group ID number you want to update
+>
+> get_user_group["member_usernames"].append("advance")
+> # An example of how to modify a group record. For a list of all the fields you can update,
+> # print(get_user_group). Here we append a new user 'advance' to the list of users associated with this group.
+>
+> update_user_group = get_user_group
+> # Assign the updated get_user_group to update_user_group to make the intent of the next request clearer.
+>
+> update_status = client.post("repositories/:repo_id:/groups/:group_id:", json=update_user_group)
+> # Replace :repo_id: with the repository ID number and :group_id: with the group ID number you want to update
+>
+> print(update_status.json())
+> # Output:
+> # {'status': 'Updated', 'id': 48, 'lock_version': 1, 'stale': None, 'uri': '/repositories/2/groups/48',
+> # 'warnings': []}
+> ```
+
+#### Creating new records
+
+When creating new records, it's recommended to do a GET request on an existing record of the type you want to create.
This
+example record is useful for seeing what fields are included for that specific record. Not all fields are required; for
+example, the `created` and `modified` fields are not necessary when creating a new record, since those fields are
+handled automatically. Others, such as `title` and `jsonmodel_type`, are required.
+
+After examining an existing record for reference, craft your JSON-formatted data and make a POST request. Make sure
+that the new record is passed either through the `-d` or `--data` parameter, or the `json` parameter if using ArchivesSnake.
+
+##### Shell Example
+
+> ```shell
+> # Create a new user group within the SHELL
+> curl -s -F password="admin" "http://localhost:8089/users/admin/login"
+> # Replace "admin" with your password and "http://localhost:8089" with your ASpace API URL followed by
+> # "/users/{your_username}/login"
+>
+> set SESSION="session_id"
+> # If using a unix-like shell, replace set with export
+>
+> curl -H "X-ArchivesSpace-Session: $SESSION" "http://localhost:8089/repositories/:repo_id:/groups/" \
+> -d '{"group_code": "test-group_managers",
+> "description": "Test group managers of the Manuscripts repository",
+> "jsonmodel_type": "group"}'
+> # Replace "http://localhost:8089" with your ASpace API URL, :repo_id: with the repository ID, and
+> # the data found in -d with the metadata you want to create the new user group.
+>
+> # Output
+> # {"status":"Created","id":24,"lock_version":0,"stale":null,"uri":"/repositories/2/groups/24","warnings":[]}
+> ```
+
+##### Python Example
+
+> ```python
+> # Create a new user group using Python and ArchivesSnake
+> from asnake.client import ASnakeClient # import the ArchivesSnake client
+>
+> client = ASnakeClient(baseurl="http://localhost:8089", username="admin", password="admin")
+> # replace http://localhost:8089 with your ArchivesSpace API URL and admin for your username and password
+>
+> client.authorize() # authorizes the client
+>
+> new_group = {
+> "group_code": "test-group_managers",
+> "description": "Test group managers of the Manuscripts repository",
+> "jsonmodel_type": "group",
+> "member_usernames": [
+> "manager"
+> ],
+> "grants_permissions": [
+> "cancel_job",
+> "manage_enumeration_record"]
+> }
+> # This is a sample user group that exceeds the minimum requirements. The minimum requirements are:
+> # jsonmodel_type, description, and group_code. grants_permissions is optional; these values can be looked up in
+> # the ASpace database within the permissions table
+>
+> post_user_group = client.post("repositories/:repo_id:/groups", json=new_group)
+> # Replace :repo_id: with the ArchivesSpace repository ID and new_group with the json data to create a new user
+> # group
+>
+> print(post_user_group.json())
+> # Output:
+> # {'status': 'Created', 'id': 23, 'lock_version': 0, 'stale': None, 'uri': '/repositories/2/groups/23',
+> # 'warnings': []}
+> ```
+
+### DELETE requests
+
+Delete requests made through the API permanently delete records, just like deletions made within the staff interface. Be careful! Make
+sure you want to delete the entire record before doing so. If you want to delete parts of a record, for example some
+notes or other fields, see [Updating existing records](#updating-existing-records).
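Because deletes cannot be undone, one defensive pattern is to wrap them in a dry-run guard. This is only an illustrative sketch: `delete_record` and `FakeClient` are hypothetical names, with `FakeClient` standing in for a real, authorized ASnakeClient.

```python
def delete_record(client, uri, confirm=False):
    """Delete a record only when explicitly confirmed; default to a dry run."""
    if not confirm:
        # Guard against accidental permanent deletion.
        return {"status": "Skipped (dry run)", "uri": uri}
    return client.delete(uri).json()

class FakeClient:
    """Stand-in for an authorized ASnakeClient; a real one issues HTTP DELETEs."""
    def delete(self, uri):
        class Resp:
            def json(self):
                return {"status": "Deleted", "id": 23}
        return Resp()

client = FakeClient()
print(delete_record(client, "/repositories/2/groups/23"))                # dry run
print(delete_record(client, "/repositories/2/groups/23", confirm=True))  # deletes
```

Defaulting `confirm` to `False` means the destructive path has to be opted into, which matches the "test this first" advice in the examples below.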
+
+To delete a record, retrieve the record's ArchivesSpace-generated ID and use the `DELETE` command for SHELL or
+`client.delete` if using the ArchivesSnake Python library.
+
+##### Shell Example
+
+> ```shell
+> # Delete a user group within the SHELL
+> curl -s -F password="admin" "http://localhost:8089/users/admin/login"
+> # Replace "admin" with your password and "http://localhost:8089" with your ASpace API URL followed by
+> # "/users/{your_username}/login"
+>
+> set SESSION="session_id"
+> # If using a unix-like shell, replace set with export
+>
+> curl -H "X-ArchivesSpace-Session: $SESSION" \
+> -X DELETE "http://localhost:8089/repositories/:repo_id:/groups/:group_id:"
+> # Replace "http://localhost:8089" with your ASpace API URL, :repo_id: with the repository ID, and
+> # :group_id: with the ID of the group you want to delete (usually found in the URL of the user group when
+> # viewing in the staff interface). Deleting is permanent so make sure to test this first!
+>
+> # Output: {"status":"Deleted","id":47}
+> ```
+
+##### Python Example
+
+> ```python
+> # Delete a user group from a repository using Python and ArchivesSnake
+> from asnake.client import ASnakeClient # import the ArchivesSnake client
+>
+> client = ASnakeClient(baseurl="http://localhost:8089", username="admin", password="admin")
+> # replace http://localhost:8089 with your ArchivesSpace API URL and admin for your username and password
+>
+> client.authorize() # authorizes the client
+>
+> delete_user_group = client.delete("repositories/:repo_id:/groups/:group_id:")
+> # Replace :repo_id: with the ArchivesSpace repository ID and :group_id: with the ArchivesSpace ID of the
+> # user group you want to delete. Deleting is permanent so make sure to test this first!
+> +> print(delete_user_group.json()) +> # Output: {'status': 'Deleted', 'id': 23} +> ``` diff --git a/src/content/docs/de/architecture/api.md b/src/content/docs/de/architecture/api.md new file mode 100644 index 0000000..474cf47 --- /dev/null +++ b/src/content/docs/de/architecture/api.md @@ -0,0 +1,48 @@ +--- +title: API +description: Instructions for how to authenticate when trying to connect to a backend session, such as through the API, along with examples of common requests for getting and posting data. +--- + +:::note +See the [API section](/api/index) for more detailed documentation. +::: + +## Authentication + +Most actions against the backend require you to be logged in as a user +with the appropriate permissions. By sending a request like: + +``` +POST /users/admin/login?password=login +``` + +your authentication request will be validated, and a session token +will be returned in the JSON response for your request. To remain +authenticated, provide this token with subsequent requests in the +`X-ArchivesSpace-Session` header. For example: + +``` +X-ArchivesSpace-Session: 8e921ac9bbe9a4a947eee8a7c5fa8b4c81c51729935860c1adfed60a5e4202cb +``` + +## CRUD + +The ArchivesSpace API provides CRUD-style interactions for a number of +different "top-level" record types. Working with records follows a +fairly standard pattern: + +``` +# Get a paginated list of accessions from repository '123' +GET /repositories/123/accessions?page=1 + +# Create a new accession, returning the ID of the new record +POST /repositories/123/accessions +{... a JSON document satisfying JSONModel(:accession) here ...} + +# Get a single accession (returned as a JSONModel(:accession) instance) using the ID returned by the previous request +GET /repositories/123/accessions/456 + +# Update an existing accession +POST /repositories/123/accessions/456 +{... 
a JSON document satisfying JSONModel(:accession) here ...} +``` diff --git a/src/content/docs/de/architecture/archivesspace_architecture.svg b/src/content/docs/de/architecture/archivesspace_architecture.svg new file mode 100644 index 0000000..e7ded40 --- /dev/null +++ b/src/content/docs/de/architecture/archivesspace_architecture.svg @@ -0,0 +1,105 @@ +<svg width="100%" viewBox="0 0 680 560" xmlns="http://www.w3.org/2000/svg"> +<defs> +<marker id="arrow" viewBox="0 0 10 10" refX="8" refY="5" markerWidth="6" markerHeight="6" orient="auto-start-reverse"> +<path d="M2 1L8 5L2 9" fill="none" stroke="context-stroke" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/> +</marker> +</defs> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="40" y="22" width="160" height="42" rx="8" stroke-width="0.5" style="fill:rgb(8, 80, 65);stroke:rgb(93, 202, 165);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="120" y="43" text-anchor="middle" dominant-baseline="central" style="fill:rgb(159, 225, 203);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Logged-in users</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe 
UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="265" y="22" width="150" height="42" rx="8" stroke-width="0.5" style="fill:rgb(68, 68, 65);stroke:rgb(180, 178, 169);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="340" y="43" text-anchor="middle" dominant-baseline="central" style="fill:rgb(211, 209, 199);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Internet</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="480" y="22" width="160" height="42" rx="8" stroke-width="0.5" style="fill:rgb(113, 43, 19);stroke:rgb(240, 153, 123);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="560" y="43" text-anchor="middle" dominant-baseline="central" style="fill:rgb(245, 196, 179);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Anonymous users</text> +</g> + +<line x1="200" y1="43" x2="265" y2="43" 
stroke="#0F6E56" stroke-width="1.5" fill="none" marker-end="url(#arrow)" style="fill:none;stroke:rgb(15, 110, 86);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<line x1="480" y1="43" x2="415" y2="43" stroke="#993C1D" stroke-width="1.5" fill="none" marker-end="url(#arrow)" style="fill:none;stroke:rgb(153, 60, 29);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +<path d="M310,64 C300,108 105,96 105,138" fill="none" stroke="#0F6E56" stroke-width="1.5" marker-end="url(#arrow)" style="fill:none;stroke:rgb(15, 110, 86);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<path d="M370,64 C380,108 547,96 547,138" fill="none" stroke="#993C1D" stroke-width="1.5" marker-end="url(#arrow)" style="fill:none;stroke:rgb(153, 60, 29);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +<rect x="15" y="115" width="650" height="145" rx="12" fill="none" stroke="var(--color-border-secondary)" stroke-width="0.5" stroke-dasharray="6 4" style="fill:none;stroke:rgba(222, 220, 209, 0.3);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-dasharray:6px, 4px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, 
BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="290" y="104" width="100" height="22" rx="11" stroke-width="0.5" style="fill:rgb(12, 68, 124);stroke:rgb(133, 183, 235);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="340" y="115" text-anchor="middle" dominant-baseline="central" style="fill:rgb(181, 212, 244);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Frontend</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="20" y="138" width="170" height="58" rx="8" stroke-width="0.5" style="fill:rgb(12, 68, 124);stroke:rgb(133, 183, 235);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="105" y="155" text-anchor="middle" dominant-baseline="central" style="fill:rgb(181, 212, 
244);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Staff UI</text> +<text x="105" y="173" text-anchor="middle" dominant-baseline="central" style="fill:rgb(133, 183, 235);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">JRuby · Rails · jQuery</text> +</g> +<line x1="36" y1="192" x2="174" y2="192" stroke="#0F6E56" stroke-width="2" stroke-linecap="round" style="fill:rgb(0, 0, 0);stroke:rgb(15, 110, 86);color:rgb(251, 251, 254);stroke-width:2px;stroke-linecap:round;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="248" y="138" width="170" height="58" rx="8" stroke-width="0.5" style="fill:rgb(12, 68, 124);stroke:rgb(133, 183, 235);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="333" y="158" text-anchor="middle" dominant-baseline="central" style="fill:rgb(181, 212, 244);stroke:none;color:rgb(251, 251, 
254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Background jobs</text> +<text x="333" y="176" text-anchor="middle" dominant-baseline="central" style="fill:rgb(133, 183, 235);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">JRuby · Ruby</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="462" y="138" width="170" height="58" rx="8" stroke-width="0.5" style="fill:rgb(12, 68, 124);stroke:rgb(133, 183, 235);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="547" y="155" text-anchor="middle" dominant-baseline="central" style="fill:rgb(181, 212, 244);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Public UI</text> +<text x="547" y="173" text-anchor="middle" dominant-baseline="central" style="fill:rgb(133, 183, 235);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, 
BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">JRuby · Rails · jQuery</text> +</g> +<line x1="478" y1="192" x2="616" y2="192" stroke="#993C1D" stroke-width="2" stroke-linecap="round" style="fill:rgb(0, 0, 0);stroke:rgb(153, 60, 29);color:rgb(251, 251, 254);stroke-width:2px;stroke-linecap:round;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +<line x1="190" y1="167" x2="248" y2="167" marker-end="url(#arrow)" style="fill:none;stroke:rgb(156, 154, 146);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +<path d="M105,196 C105,258 80,258 80,330" fill="none" marker-end="url(#arrow)" style="fill:none;stroke:rgb(156, 154, 146);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<path d="M333,196 C333,262 120,262 120,330" fill="none" marker-end="url(#arrow)" style="fill:none;stroke:rgb(156, 154, 146);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<path d="M547,196 C547,268 160,268 160,330" fill="none" marker-end="url(#arrow)" style="fill:none;stroke:rgb(156, 154, 146);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe 
UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +<rect x="15" y="310" width="650" height="115" rx="12" fill="none" stroke="var(--color-border-secondary)" stroke-width="0.5" stroke-dasharray="6 4" style="fill:none;stroke:rgba(222, 220, 209, 0.3);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-dasharray:6px, 4px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="290" y="299" width="100" height="22" rx="11" stroke-width="0.5" style="fill:rgb(8, 80, 65);stroke:rgb(93, 202, 165);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="340" y="310" text-anchor="middle" dominant-baseline="central" style="fill:rgb(159, 225, 203);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Backend</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="50" y="330" width="185" height="68" 
rx="8" stroke-width="0.5" style="fill:rgb(8, 80, 65);stroke:rgb(93, 202, 165);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="142" y="352" text-anchor="middle" dominant-baseline="central" style="fill:rgb(159, 225, 203);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">ArchivesSpace API</text> +<text x="142" y="369" text-anchor="middle" dominant-baseline="central" style="fill:rgb(93, 202, 165);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">JRuby · Sinatra</text> +<text x="142" y="385" text-anchor="middle" dominant-baseline="central" style="fill:rgb(93, 202, 165);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">JSONModel</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="435" y="330" width="195" height="68" rx="8" stroke-width="0.5" style="fill:rgb(8, 80, 65);stroke:rgb(93, 202, 165);color:rgb(251, 251, 
254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="532" y="352" text-anchor="middle" dominant-baseline="central" style="fill:rgb(159, 225, 203);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Indexer</text> +<text x="532" y="369" text-anchor="middle" dominant-baseline="central" style="fill:rgb(93, 202, 165);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">JRuby · Sinatra</text> +<text x="532" y="385" text-anchor="middle" dominant-baseline="central" style="fill:rgb(93, 202, 165);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">JSONModel</text> +</g> + +<text x="340" y="346" text-anchor="middle" style="fill:rgb(194, 192, 182);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:auto">monitors updates</text> +<line x1="435" y1="359" x2="235" y2="359" marker-end="url(#arrow)" style="fill:none;stroke:rgb(156, 154, 146);color:rgb(251, 251, 
254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +<rect x="15" y="450" width="650" height="95" rx="12" fill="none" stroke="var(--color-border-secondary)" stroke-width="0.5" stroke-dasharray="6 4" style="fill:none;stroke:rgba(222, 220, 209, 0.3);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-dasharray:6px, 4px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="290" y="439" width="100" height="22" rx="11" stroke-width="0.5" style="fill:rgb(99, 56, 6);stroke:rgb(239, 159, 39);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="340" y="450" text-anchor="middle" dominant-baseline="central" style="fill:rgb(250, 199, 117);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Storage</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, 
BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="50" y="462" width="185" height="58" rx="8" stroke-width="0.5" style="fill:rgb(99, 56, 6);stroke:rgb(239, 159, 39);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="142" y="482" text-anchor="middle" dominant-baseline="central" style="fill:rgb(250, 199, 117);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">MySQL</text> +<text x="142" y="500" text-anchor="middle" dominant-baseline="central" style="fill:rgb(239, 159, 39);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">Primary data store</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="435" y="462" width="195" height="58" rx="8" stroke-width="0.5" style="fill:rgb(99, 56, 6);stroke:rgb(239, 159, 39);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="532" y="482" 
text-anchor="middle" dominant-baseline="central" style="fill:rgb(250, 199, 117);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Apache Solr</text> +<text x="532" y="500" text-anchor="middle" dominant-baseline="central" style="fill:rgb(239, 159, 39);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">Search index · Java</text> +</g> + +<line x1="142" y1="398" x2="142" y2="462" marker-end="url(#arrow)" style="fill:none;stroke:rgb(156, 154, 146);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<line x1="532" y1="398" x2="532" y2="462" marker-end="url(#arrow)" style="fill:none;stroke:rgb(156, 154, 146);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +</svg> \ No newline at end of file diff --git a/src/content/docs/de/architecture/backend.md b/src/content/docs/de/architecture/backend.md new file mode 100644 index 0000000..e44a9ad --- /dev/null +++ b/src/content/docs/de/architecture/backend.md @@ -0,0 +1,422 @@ +--- +title: Backend +description: Describes the architecture behind the backend of ArchivesSpace, including the main.rb and rest.rb files for initiating ArchivesSpace and defining API mechanisms, controllers, 
models, nested records, relationships, agents, validation, optimistic concurrency control, and the permissions model.
+---
+
+The backend is responsible for implementing the ArchivesSpace API, and
+supports the sort of access patterns shown in the previous section.
+We've seen that the backend must support CRUD operations against a
+number of different record types, and those records are expressed as
+JSON documents produced from instances of JSONModel classes.
+
+The following sections describe how the backend fits together.
+
+## main.rb -- load and initialize the system
+
+The `main.rb` program is responsible for starting the ArchivesSpace
+system: loading all controllers and models, creating
+users/groups/permissions as needed, and preparing the system to handle
+requests.
+
+When the system starts up, the `main.rb` program performs the
+following actions:
+
+- Initializes JSONModel--triggering it to load all record schemas
+  from the filesystem and generate the classes that represent each
+  record type.
+- Connects to the database
+- Loads all backend models--the system's domain objects and
+  persistence layer
+- Loads all controllers--defining the system's REST endpoints
+- Starts the job scheduler--handling scheduled tasks such as backups
+  of the demo database (if used)
+- Runs the "bootstrap ACLs" process--creates the admin user and
+  group if they don't already exist; creates the hidden global
+  repository; creates system users and groups.
+- Fires the "backend started" notification to any registered
+  observers.
+
+In addition to handling the system startup, `main.rb` also provides
+the following facilities:
+
+- Session handling--tracks authenticated backend sessions using the
+  token extracted from the `X-ArchivesSpace-Session` request header.
+- Helper methods for accessing the current user and current session
+  of each request.
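The session-tracking idea--mint a token at login, then map the `X-ArchivesSpace-Session` header of later requests back to a user--can be sketched in a few lines of self-contained Ruby. This is a toy illustration of the concept, not the actual `main.rb` code:

```ruby
# Toy illustration of header-based session tracking, NOT the real
# ArchivesSpace implementation: a login mints a token, and later
# requests are matched back to a user via the session header.
require 'securerandom'

class SessionStore
  def initialize
    @sessions = {}                  # token => username
  end

  # Called after a successful login: mint a token for the user
  def create(username)
    token = SecureRandom.hex(32)
    @sessions[token] = username
    token
  end

  # Called on each subsequent request: look up the header value
  def user_for(headers)
    @sessions[headers['X-ArchivesSpace-Session']]
  end
end

store = SessionStore.new
token = store.create('admin')
store.user_for('X-ArchivesSpace-Session' => token)   # => "admin"
```

A production implementation would also persist sessions and expire stale tokens; the sketch omits both.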
+
+## rest.rb -- Request and response handling for REST endpoints
+
+The `rest.rb` module provides the mechanism used to define the API's
+REST endpoints. Each endpoint definition includes:
+
+- The URI and HTTP request method used to access the endpoint
+- A list of typed parameters for that endpoint
+- Documentation for the endpoint, each parameter, and each possible
+  response that may be returned
+- Permission checks--predicates that the current user must satisfy
+  to be able to use the endpoint
+
+Each controller in the system consists of one or more of these
+endpoint definitions. By using the endpoint syntax provided by
+`rest.rb`, the controllers can declare the interface they provide, and
+are freed from the boilerplate associated with request
+handling--checking parameter types, coercing values from strings into
+other types, and so on.
+
+The `main.rb` and `rest.rb` components work together to insulate the
+controllers from much of the complexity of request handling. By the
+time a request reaches the body of an endpoint:
+
+- It can be sure that all required parameters are present and of the
+  correct types.
+- The body of the request has been fetched, parsed into the
+  appropriate type (usually a JSONModel instance--see below) and
+  made available as a request parameter.
+- Any parameters provided by the client that weren't present in the
+  endpoint definition have been dropped.
+- The user's session has been retrieved, and any defined access
+  control checks have been carried out.
+- A connection to the database has been assigned to the request, and
+  a transaction has been opened. If the controller throws an
+  exception, the transaction will be automatically rolled back.
+
+## Controllers
+
+As touched upon in the previous section, controllers implement the
+functionality of the ArchivesSpace API by registering one or more
+endpoints.
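To make the endpoint-definition syntax concrete, here is a toy, self-contained version of such a DSL. The method names (`Endpoint.get`, `.description`, `.params`, `.permissions`, `.returns`) follow the style described above, but the implementation is illustrative rather than the real `rest.rb`:

```ruby
# Toy version of an endpoint-definition DSL in the spirit of rest.rb.
# The names mirror the ArchivesSpace style; the implementation is not
# the real one.
class Endpoint
  attr_reader :uri, :spec

  def self.registry
    @registry ||= []
  end

  def self.get(uri)
    new(:get, uri)
  end

  def initialize(method, uri)
    @method = method
    @uri = uri
    @spec = { params: [], permissions: [] }
  end

  def description(text)
    @spec[:description] = text
    self
  end

  def params(*defs)
    @spec[:params].concat(defs)
    self
  end

  def permissions(perms)
    @spec[:permissions] = perms
    self
  end

  # Supplying the handler block completes and registers the endpoint
  def returns(*responses, &handler)
    @spec[:returns] = responses
    @spec[:handler] = handler
    self.class.registry << self
    self
  end
end

# A hypothetical "get one accession" endpoint
Endpoint.get('/repositories/:repo_id/accessions/:id')
        .description('Get an Accession by ID')
        .params(['id', Integer], ['repo_id', Integer])
        .permissions([:view_repository])
        .returns([200, '(:accession)']) do |params|
  { uri: "/repositories/#{params[:repo_id]}/accessions/#{params[:id]}" }
end

ep = Endpoint.registry.first
ep.spec[:handler].call(repo_id: 123, id: 456)
# => {:uri=>"/repositories/123/accessions/456"}
```

The payoff of this style is that the declaration carries everything the framework needs--routing, parameter typing, documentation, and permission checks--so the handler body stays small.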
Each endpoint accepts an HTTP request for a given URI,
+carries out the request and returns a JSON response (if successful) or
+throws an exception (if something goes wrong).
+
+Each controller lives in its own file, and these can be found in the
+`backend/app/controllers` directory. Since most of the request
+handling logic is captured by the `rest.rb` module, controllers
+generally don't do much more than coordinate the classes from the
+model layer and send a response back to the client.
+
+### crud_helpers.rb -- capturing common CRUD controller actions
+
+Even though controllers are quite thin, there's still a lot of overlap
+in their behaviour. Each record type in the system supports the same
+set of CRUD operations, and from the controller's point of view
+there's not much difference between an update request for an accession
+and an update request for a digital object (for example).
+
+The `crud_helpers.rb` module pulls this commonality into a set of
+helper methods that are invoked by each controller, providing methods
+for the standard operations of the system.
+
+## Models
+
+The backend's model layer is where the action is. The model layer's
+role is to bridge the gap between the high-level JSONModel objects
+(complete with their properties, nested records, references to other
+records, etc.) and the underlying relational database (via the Sequel
+database toolkit). As such, the model layer is mainly concerned with
+mapping JSONModel instances to database tables in a way that preserves
+everything and allows them to be queried efficiently.
+
+Each record type has a corresponding model class, but the individual
+model definitions are often quite sparse. This is because the
+different record types differ in the following ways:
+
+- The set of properties they allow (and their types, valid values,
+  etc.)
+- The types of nested records they may contain +- The types of relationships they may have with other record types + +The first of these--the set of allowable properties--is already +captured by the JSONModel schema definitions, so the model layer +doesn't have to enforce these restrictions. Each model can simply +take the values supplied by the JSONModel object it is passed and +assume that everything that needs to be there is there, and that +validation has already happened. + +The remaining two aspects _are_ enforced by the model layer, but +generally don't pertain to just a single record type. For example, an +accession may be linked to zero or more subjects, but so can several +other record types, so it doesn't make sense for the `Accession` model +to contain the logic for handling subjects. + +In practice we tend to see very little functionality that belongs +exclusively to a single record type, and as a result there's not much +to put in each corresponding model. Instead, models are generally +constructed by combining a number of mix-ins (Ruby modules) to satisfy +the requirements of the given record type. Features à la carte! + +### ASModel and other mix-ins + +At a minimum, every model includes the `ASModel` mix-in, which provides +base versions of the following methods: + +- `Model.create_from_json` -- Take a JSONModel instance and create a + model instance (a subclass of Sequel::Model) from it. Returns the + instance. +- `model.update_from_json` -- Update the target model instance with + the values from a given JSONModel instance. +- `Model.sequel_to_json` -- Return a JSONModel instance of the appropriate + type whose values are taken from the target model instance. + Model classes are declared to correspond to a particular JSONModel + instance when created, so this method can automatically return a + JSONModel instance of the appropriate type. 
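The way mix-ins layer behaviour over these base methods--override, delegate with `super`, then decorate the result--can be shown with plain Ruby modules. This is a toy sketch of the pattern, not the actual ASModel code:

```ruby
# Toy sketch of the mix-in `super` chain, NOT real ArchivesSpace code.
# Each module overrides create_from_json, delegates up the chain with
# `super`, and then attaches its own data to the result.

module ASModelBase
  def create_from_json(json)
    { saved: json[:title] }            # pretend this wrote a database row
  end
end

module Notes
  def create_from_json(json)
    obj = super                        # delegate to the next module in the chain
    obj[:notes] = json.fetch(:notes, [])
    obj
  end
end

class Accession
  extend ASModelBase
  extend Notes                         # Notes is searched before ASModelBase
end

Accession.create_from_json(title: 'Papers', notes: ['fragile'])
# => {:saved=>"Papers", :notes=>["fragile"]}
```

Because each override ends by returning the (possibly decorated) result, any number of mix-ins can be stacked without the model class knowing about them.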
+
+These methods comprise the primary interface of the model layer:
+virtually every mix-in in the model layer overrides one or all of
+these to add behaviour in a modular way.
+
+For example, the 'notes' mix-in adds support for multiple notes to be
+added to a record type--by mixing this module into a model class, that
+class will automatically accept a JSONModel property called 'notes'
+that will be stored in and retrieved from the database as needed.
+This works by overriding the three methods as follows:
+
+- `Model.create_from_json` -- Call 'super' to delegate the creation to
+  the next mix-in in the chain. When it returns the newly created
+  object, extract the notes from the JSONModel instance and attach
+  them to the model instance (saving them in the database).
+- `model.update_from_json` -- Call 'super' to save the other updates
+  to the database, then replace any existing notes entries for the
+  record with the ones provided by the JSONModel.
+- `Model.sequel_to_json` -- Call 'super' to have the next mix-in in
+  the chain create a JSONModel instance, then pull the stored notes
+  from the database and poke them into it.
+
+All of the mix-ins follow this pattern: call 'super' to delegate the
+call to the next mix-in in the chain (eventually reaching ASModel),
+then manipulate the result to implement the desired behaviour.
+
+### Nested records
+
+Some record types, like accessions, digital objects, and subjects, are
+_top-level records_, in the sense that they are created independently
+of any other record and are addressable via their own URI. However,
+there are a number of records that can't exist in isolation, and only
+exist in the context of another record. When one record can contain
+instances of another record, we call them _nested records_.
+
+To give an example, the `date` record type is nested within an
+`accession` record (among others).
When the model layer is asked to +save a JSONModel instance containing nested records, it must pluck out +those records, save them in the appropriate database table, and ensure +that linkages are created within the database to allow them to be +retrieved later. + +This happens often enough that it would be tedious to write code for +each model to handle its nested records, so the ASModel mix-in +provides a declaration to handle this automatically. For example, the +`accession` model uses a definition like: + +```ruby +base.def_nested_record(:the_property => :dates, + :contains_records_of_type => :date, + :corresponding_to_association => :date) +``` + +When creating an accession, this declaration instructs the `Accession` +model to create a database record for each date listed in the "dates" +property of the incoming record. Each of these date records will be +automatically linked to the created accession. + +### Relationships + +A relationship is a link between two top-level records, where the link +is a separate, dynamically generated, model with zero or more +properties of its own. + +For example, the `Event` model can be related to several different +types of records: + +```ruby +define_relationship(:name => :event_link, + :json_property => 'linked_records', + :contains_references_to_types => proc {[Accession, Resource, ArchivalObject]}) +``` + +This declaration generates a custom class that models the relationship +between events and the other record types. 
The corresponding JSON +schema declaration for the `linked_records` property looks like this: + +```ruby +"linked_records" => { + "type" => "array", + "ifmissing" => "error", + "minItems" => 1, + "items" => { + "type" => "object", + "subtype" => "ref", + "properties" => { + "role" => { + "type" => "string", + "dynamic_enum" => "linked_event_archival_record_roles", + "ifmissing" => "error", + }, + "ref" => { + "type" => [{"type" => "JSONModel(:accession) uri"}, + {"type" => "JSONModel(:resource) uri"}, + {"type" => "JSONModel(:archival_object) uri"}, + ...], + "ifmissing" => "error" + }, + ... +``` + +That is, the property includes URI references to other records, plus +an additional "role" property to indicate the nature of the +relationship. The corresponding JSON might then be: + +```ruby +linked_records: [{ref: '/repositories/123/accessions/456', role: 'authorizer'}, ...] +``` + +The `define_relationship` definition automatically makes use of the +appropriate join tables in the database to store this relationship and +retrieve it later as needed. + +### Agents and `agent_manager.rb` + +Agents present a bit of a representational challenge. There are four +types of agents (person, family, corporate entity, software), and at a +high-level they are structured in the same way: each type can contain +one or more name records, zero or more contact records, and a number +of properties. Records that link to agents (via a relationship, for +example) can link to any of the four types so, in some sense, each +agent type implements a common `Agent` interface. + +However, the agent types differ in their details. Agents contain name +records, but the types of those name records correspond to the type of +the agent: a person agent contains a person name record, for example. +So, in spite of their similarities, the different agents need to be +modelled as separate record types. + +The `agent_manager` module captures the high-level similarities +between agents. 
Each agent model includes the agent manager mix-in:
+
+```ruby
+include AgentManager::Mixin
+```
+
+and then defines itself declaratively by the provided class method:
+
+```ruby
+register_agent_type(:jsonmodel => :agent_person,
+                    :name_type => :name_person,
+                    :name_model => NamePerson)
+```
+
+This definition sets up the properties of that agent. It creates:
+
+- a one_to_many relationship with the corresponding name
+  type of the agent.
+- a one_to_many relationship with the agent_contact table.
+- a nested record definition which defines the names list of the agent
+  (so the list of names for the agent is automatically stored in
+  and retrieved from the database)
+- a nested record definition for the contact list of the agent.
+
+## Validations
+
+As records are added to and updated within the ArchivesSpace system,
+they are validated against a number of rules to make sure they are
+well-formed and don't conflict with other records. There are two
+types of record validation:
+
+- Record-level validations check that a record is self-consistent:
+  that it contains all required fields, that its values are of the
+  appropriate type and format, and that its fields don't contradict
+  one another.
+- System-level validations check that a record makes sense in a
+  broader context: that it doesn't share a unique identifier with
+  another record, and that any record it references actually exists.
+
+Record-level validations can be performed in isolation, while
+system-level validations require comparing the record to others in the
+database.
+
+System-level validations need to be implemented in the database itself
+(as integrity constraints), but record-level validations are often too
+complex to be expressed this way. As a result, validations in
+ArchivesSpace can appear in one or both of the following layers:
+
+- At the JSONModel level, validations are captured by JSON schema
+  documents.
Where more flexibility is needed, custom validations
+  are added to the `common/validations.rb` file, allowing validation
+  logic to be expressed using arbitrary Ruby code.
+- At the database level, validations are captured using database
+  constraints. Since the error messages yielded by these
+  constraints generally aren't useful for users, database
+  constraints are also replicated in the backend's model layer using
+  Sequel validations, which give more targeted error messages.
+
+As a general rule, record-level validations are handled by the
+JSONModel validations (either through the JSON schema or custom
+validations), while system-level validations are handled by the model
+and the database schema.
+
+## Optimistic concurrency control
+
+Updating a record using the ArchivesSpace API is a two-part process:
+
+```
+# Perform a `GET` against the desired record to fetch its JSON
+# representation:
+
+GET /repositories/5/accessions/2
+
+# Manipulate the JSON representation as required, and then `POST`
+# it back to replace the original:
+
+POST /repositories/5/accessions/2
+```
+
+If two people do this simultaneously, there's a risk that one person
+would silently overwrite the changes made by the other. To prevent
+this, every record is marked with a version number that it carries in
+the `lock_version` property. When the system receives the updated
+copy of a record, it checks that the version it carries is still
+current; if the version number doesn't match the one stored in the
+database, the update request is rejected and the user must re-fetch
+the latest version before applying their update.
+
+## The ArchivesSpace permissions model
+
+The ArchivesSpace backend enforces access control, defining which
+users are allowed to create, read, update, suppress and delete the
+records in the system. The major actors in the permissions model are:
+
+- Repositories -- The main mechanism for partitioning the
+  ArchivesSpace system.
For example, an instance might contain one
+  repository for each section of an organisation, or one repository
+  for each major collection.
+- Users -- An entity that uses the system--often a person, but
+  perhaps a consumer of the ArchivesSpace API. The set of users is
+  global to the system, and a single user may have access to
+  multiple repositories.
+- Records -- A unit of information in the system. Some records are
+  global (existing outside of any given repository), while some are
+  repository-scoped (belonging to a single repository).
+- Groups -- A set of users _within_ a repository. Each group is
+  assigned zero or more permissions, which it confers upon its
+  members.
+- Permissions -- An action that a user can perform. For example, a
+  user with the `update_accession_record` permission is allowed to
+  update accessions for a repository.
+
+To summarize, a user can perform an action within a repository if they
+are a member of a group that has been assigned permission to perform
+that action.
+
+### Conceptual trickery
+
+Since they're repository-scoped, groups govern access to repositories.
+However, there are several record types that exist at the top-level of
+the system (such as the repositories themselves, subjects and agents),
+and the permissions model must be able to accommodate these.
+
+To get around this, we invent a concept: the "global" repository
+conceptually contains the whole ArchivesSpace universe. As with other
+repositories, the global repository contains groups, and users can be
+made members of these groups to grant them permissions across the
+entire system. One example of this is the "admin" user, which is
+granted all permissions by the "administrators" group of the global
+repository; another is the "search indexer" user, which can read (but
+not update or delete) any record in the system.
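
The membership rule described above -- including the special "global" repository -- can be sketched as a small, self-contained check (all names and data here are hypothetical; this is not the actual ArchivesSpace permissions code):

```ruby
# Hypothetical sketch of the permission rule: a user may perform an
# action in a repository if some group in that repository (or in the
# global repository) includes the user and carries the permission.
Group = Struct.new(:repo, :members, :permissions)

def can_perform?(user, permission, repo, groups)
  groups.any? do |g|
    [repo, :global].include?(g.repo) &&
      g.members.include?(user) &&
      g.permissions.include?(permission)
  end
end

groups = [
  Group.new(:repo_2, ['archivist'], ['update_accession_record']),
  Group.new(:global, ['admin'], ['update_accession_record', 'delete_archival_record'])
]

can_perform?('archivist', 'update_accession_record', :repo_2, groups) # => true
can_perform?('archivist', 'update_accession_record', :repo_9, groups) # => false
can_perform?('admin', 'delete_archival_record', :repo_9, groups)      # => true
```

The last call illustrates the "global repository" trick: membership in a global group confers the permission in every repository.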
diff --git a/src/content/docs/de/architecture/database.md b/src/content/docs/de/architecture/database.md
new file mode 100644
index 0000000..37609e0
--- /dev/null
+++ b/src/content/docs/de/architecture/database.md
@@ -0,0 +1,554 @@
+---
+title: Database
+description: Describes the structure of the ArchivesSpace database, including its main, supporting, subrecord, relationship, enumeration, user/setting/permission, job, and system tables, and the specific fields present in each.
+---
+
+The ArchivesSpace database stores all data that is created within an ArchivesSpace instance. As described in other sections of this documentation, the backend code - particularly the model layer and `ASModel_crud.rb` file - uses the `Sequel` database toolkit to bridge the gap between this underlying data and the JSON objects which are exchanged by the other components of the system.
+
+Often, querying the database directly is the most efficient and powerful way to retrieve data from ArchivesSpace. It is also possible to use raw SQL queries to create custom reports that can be run by users in the staff interface. Please consult the [Custom Reports](/customization/reports) section of this documentation for additional information on creating custom reports.
+
+<!-- .See this [plugin](link-to-plugin) for an example. Also -->
+
+It is recommended that ArchivesSpace be run against MySQL in production, not the included demo database. Instructions on setting up ArchivesSpace to run against MySQL are [here](/provisioning/mysql).
+
+The examples in this section are written for MySQL. There are many freely-available tutorials on the internet which can provide guidance to those unfamiliar with MySQL query syntax and the features of the language.
+
+**NOTE**: The documentation below is current through database schema version 129, application version 2.7.1.
+
+## Database Overview
+
+The ArchivesSpace database schema and its mapping to the JSONModel objects used by the other parts of the system are defined by the files in the `common/schemas` and `backend/models` directories. The database itself is created via the `setup-database` script in the `scripts` directory. This script runs the migrations in the `common/db/migrations` directory.
+
+The tables in the ArchivesSpace database can be grouped into several general categories:
+
+- [Database Overview](#database-overview)
+- [Main record tables](#main-record-tables)
+- [Supporting record tables](#supporting-record-tables)
+- [Subrecord tables](#subrecord-tables)
+- [Relationship tables](#relationship-tables)
+- [Enumerations](#enumerations)
+- [User, setting, and permission tables](#user-setting-and-permission-tables)
+- [Job tables](#job-tables)
+- [System tables](#system-tables)
+- [Parent-Child Relationships and Sequencing](#parent-child-relationships-and-sequencing)
+  - [Repository-scoped records](#repository-scoped-records)
+  - [Parent/child relationships](#parentchild-relationships)
+  - [Sequencing](#sequencing)
+- [Boolean fields](#boolean-fields)
+- [Read-Only Fields](#read-only-fields)
+
+One way to get a view of all tables and columns in your ArchivesSpace instance is to run the following query in a MySQL client:
+
+```sql
+SELECT TABLE_SCHEMA
+     , TABLE_NAME
+     , COLUMN_NAME
+     , ORDINAL_POSITION
+     , IS_NULLABLE
+     , COLUMN_TYPE
+     , COLUMN_KEY
+FROM INFORMATION_SCHEMA.COLUMNS
+#change the following value to whatever your database is named
+WHERE TABLE_SCHEMA Like 'archivesspace'
+```
+
+Additionally, a BETA version of an [ArchivesSpace data dictionary](https://github.com/archivesspace/data-dictionary-initial) has been created by members of the ArchivesSpace development team and the ArchivesSpace User Advisory Council Reports team.
+
+## Main record tables
+
+These tables hold data about the primary record types in ArchivesSpace.
Main record types are distinguished from subrecords in that they have their own persistent URIs - corresponding to their database identifiers/primary keys - that are resolvable via the staff interface, public interface, and API. They are distinguished from supporting records in that they are the primary descriptive record types that users will interact with in the system. + +All of these records, except archival objects, can be created independently of any other record. Archival object records represent components of a larger entity, and so they must have a resource record as a root parent. See the [parent/child relationships](#parent-child-relationships-and-sequencing) section for more information about the representation of hierarchical relationships in the database. + +A few common fields occur in several main record tables. These similar fields are defined by the parent schemas in the `common/schemas` directory: + +| Column Name | Tables | +| ----------------------------------------------- | ---------------------------------------------------------------------------------------- | +| `title` | `accession`, `archival_object`, `digital_object`, `digital_object_component`, `resource` | +| `identifier`/`component_id`/`digital_object_id` | `accession`, `resource`/`archival_object`, `digital_object_component`/`digital_object` | +| `other_level` | `archival_object`, `resource` | +| `repository_processing_note` | `archival_object`, `resource` | + +<!-- Booleans --> + +All of the main records have a set of fields which store boolean values (`0` or `1`) that indicate whether the records are published in the public user interface, suppressed in the staff interface, or have some kind of applicable restriction. The exception to this is the `repository` table, which does not have a restriction boolean, but does have a `hidden` boolean. The `accession` table has multiple restriction-related booleans. See the section below for more information about boolean fields. 
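
For example, the publication and suppression booleans can be queried directly. A sketch (assuming the `publish` and `suppressed` columns described above, as they appear in the `resource` table):

```sql
-- Illustrative: resources that are unpublished or suppressed
SELECT id
     , title
     , publish
     , suppressed
FROM resource
WHERE publish = 0 OR suppressed = 1
```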
+ +Beginning in version 2.6.0, the main record tables (and some supporting records - see below) also contain fields which hold data about archival resource keys (ARKs) and human-readable URLs (slugs): + +| Column Name | Tables | +| ------------------ | ------------------------------------------------------------------------------------------------------ | +| `slug` | `accession`, `archival_object`, `digital_object`, `digital_object_component`, `repository`, `resource` | +| `external_ark_url` | `archival_object`, `resource` | + +Also stored in these and all other tables are enumeration values, foreign keys which correspond to database identifiers in the `enumeration_value` table, which stores controlled values. See enumeration section below for more detail. + +All subrecord data types - i.e. dates, extents, instances - relating to a main or supporting record are stored in their own tables and linked to main or supporting records via foreign key references in the subrecord tables. See subrecord section below for more detail. 
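
As an illustration, a slug can be used to look up the record behind a human-readable URL (the slug value here is invustrative only -- substitute one from your own instance):

```sql
-- Illustrative: find the resource that a public slug points at
SELECT id
     , repo_id
     , title
FROM resource
WHERE slug = 'example-collection'
```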
+ +The remaining data in the main record tables is text, and is unique to each table: + +| TABLE_NAME | COLUMN_NAME | IS_NULLABLE | COLUMN_TYPE | COLUMN_KEY | +| -------------------------- | ------------------------------- | ----------- | ------------ | ---------- | +| `accession` | `content_description` | YES | text | | +| `accession` | `condition_description` | YES | text | | +| `accession` | `disposition` | YES | text | | +| `accession` | `inventory` | YES | text | | +| `accession` | `provenance` | YES | text | | +| `accession` | `general_note` | YES | text | | +| `accession` | `accession_date` | YES | date | | +| `accession` | `retention_rule` | YES | text | | +| `accession` | `access_restrictions_note` | YES | text | | +| `accession` | `use_restrictions_note` | YES | text | | +| `archival_object` | `ref_id` | NO | varchar(255) | MUL | +| `digital_object_component` | `label` | YES | varchar(255) | | +| `repository` | `repo_code` | NO | varchar(255) | UNI | +| `repository` | `name` | NO | varchar(255) | | +| `repository` | `org_code` | YES | varchar(255) | | +| `repository` | `parent_institution_name` | YES | varchar(255) | | +| `repository` | `url` | YES | varchar(255) | | +| `repository` | `image_url` | YES | varchar(255) | | +| `repository` | `contact_persons` | YES | text | | +| `repository` | `description` | YES | text | | +| `repository` | `oai_is_disabled` | YES | int | | +| `repository` | `oai_sets_available` | YES | text | | +| `resource` | `ead_id` | YES | varchar(255) | | +| `resource` | `ead_location` | YES | varchar(255) | | +| `resource` | `finding_aid_title` | YES | text | | +| `resource` | `finding_aid_filing_title` | YES | text | | +| `resource` | `finding_aid_date` | YES | varchar(255) | | +| `resource` | `finding_aid_author` | YES | text | | +| `resource` | `finding_aid_language_note` | YES | varchar(255) | | +| `resource` | `finding_aid_sponsor` | YES | text | | +| `resource` | `finding_aid_edition_statement` | YES | text | | +| `resource` | 
`finding_aid_series_statement` | YES | text | | +| `resource` | `finding_aid_note` | YES | text | | +| `resource` | `finding_aid_subtitle` | YES | text | | + +<!-- arguably top contsainers should be here, or digital objects should be in the supporting records --> + +## Supporting record tables + +Like the main record types listed above, supporting records can also be created independently of other records, and are addressable in the staff interface and API via their own URI. However, they are primarily meaningful via their many-to-many linkages to the main record types (and, sometimes, other supporting record types). These records typically provide additional information about, or otherwise enhance, the primary record types. A few supporting record types - for instance those in the `term` table - are used to enhance other supporting record types. + +| Supporting module tables | Linked to | +| --------------------------------- | --------------------------------------------------- | +| `agent_corporate_entity` | +| `agent_family` | +| `agent_person` | +| `agent_software` | +| `assessment` | +| `classification` | `accession`, `resource` | +| `classification_term` | `classification`, `accession`, `resource` | +| `container_profile` | `top_container` | +| `event` | +| `location` | +| `location_profile` | `location` | +| `subject` | `resource`, `archival_object` | +| `term` | `subject` | +| `top_container` | +| `vocabulary` | `subject`, `term` | +| `assessment_attribute_definition` | `assessment_attribute`, `assessment_attribute_note` | + +<!-- is this the appropriate place for the assessment attribute def? Vocabulary? --> + +## Subrecord tables + +<!-- link to ### Nested records section of the backend readme --> + +Subrecords must be associated with a main or supporting record - they cannot be created independently. As such, they do not have their own URIs, and can only be accessed via the API by retrieving the top-level record with which they are associated. 
In the staff interface these records are embedded within main or supporting record views. In the API subrecord data is contained in arrays within main or supporting records. + +The various subrecord types do have their own database tables. In addition to data specific to the subrecord type, the tables also contain foreign key columns which hold the database identifiers of main or supporting records. Subrecord tables must have a value in one of the foreign key fields. Some subrecords can have another subrecord as parent (for instance, the `sub_container` subrecord has `instance_id` as its foreign key column). + +Subrecords exist in a one-to-many relationship with their parent records, so a record's `id` may appear multiple times in a subrecord table (i.e. when there are two dates associated with a resource record). + +It is important to note that subrecords are deleted and recreated upon each save of the main or supporting record with which they are associated, regardless of whether the subrecord itself is modified. This means that the database identifier is deleted and reassigned upon each save. 
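
Because of this one-to-many relationship, a single parent id can match several rows in a subrecord table. A sketch (the accession id is invented; `begin`, `end`, and `expression` are columns of the `date` table):

```sql
-- Illustrative: all date subrecords attached to one accession
SELECT d.id
     , d.begin
     , d.end
     , d.expression
FROM date d
WHERE d.accession_id = 42
```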
+ +| Subrecord tables | Foreign keys | +| --------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `agent_contact` | `agent_person_id`, `agent_family_id`, `agent_corporate_entity_id`, `agent_software_id` | +| `date` | `accession_id`, `deaccession_id`, `archival_object_id`, `resource_id`, `event_id`, `digital_object_id`, `digital_object_component_id`, `related_agents_rlshp_id`, `agent_person_id`, `agent_family_id`, `agent_corporate_entity_id`, `agent_software_id`, `name_person_id`, `name_family_id`, `name_corporate_entity_id`, `name_software_id` | +| `extent` | `accession_id`, `deaccession_id`, `archival_object_id`, `resource_id`, `digital_object_id`, `digital_object_component_id` | +| `external_document` | `accession_id`, `archival_object_id`, `resource_id`, `subject_id`, `agent_person_id`, `agent_family_id`, `agent_corporate_entity_id`, `agent_software_id`, `rights_statement_id`, `digital_object_id`, `digital_object_component_id`, `event_id` | +| `external_id` | `subject_id`, `accession_id`, `archival_object_id`, `collection_management_id`, `digital_object_id`, `digital_object_component_id`, `event_id`, `location_id`, `resource_id` | +| `file_version` | `digital_object_id`, `digital_object_component_id` | +| `instance` | `resource_id`, `archival_object_id`, `accession_id` | +| `name_authority_id` | `name_person_id`, `name_family_id`, `name_software_id`, `name_corporate_entity_id` | +| `name_corporate_entity` | `agent_corporate_entity_id` | +| `name_family` | `agent_family_id` | +| `name_person` | `agent_person_id` | +| `name_software` | `agent_software_id` | +| `note` | `resource_id`, `archival_object_id`, `digital_object_id`, `digital_object_component_id`, 
`agent_person_id`, `agent_corporate_entity_id`, `agent_family_id`, `agent_software_id`, `rights_statement_act_id`, `rights_statement_id` | +| `note_persistent_id` | `note_id`, `parent_id` | +| `revision_statement` | `resource_id` | +| `rights_restriction` | `resource_id`, `archival_object_id` | +| `rights_restriction_type` | `rights_restriction_id` | +| `rights_statement` | `accession_id`, `archival_object_id`, `resource_id`, `digital_object_id`, `digital_object_component_id`, `repo_id` | +| `rights_statement_act` | `rights_statement_id` | +| `sub_container` | `instance_id` | +| `telephone` | `agent_contact_id` | +| `user_defined` | `accession_id`, `resource_id`, `digital_object_id` | +| `ark_name` | `archival_object_id`, `resource_id` | +| `assessment_attribute_note` | `assessment_id` | +| `assessment_attribute` | `assessment_id` | +| `lang_material` | `archival_object_id`, `resource_id`, `digital_object_id`, `digital_object_component_id` | +| `language_and_script` | `lang_material_id` | +| `collection_management` | `accession_id`, `resource_id`, `digital_object_id` | +| `location_function` | `location_id` | + +<!-- appropriate place for collection management and deaccession stuff? what about location function? all the rights statement stuff? Is there a specific thing that defines a subrecord as a subrecord? --> + +## Relationship tables + +These tables exist to enable linking between main records and supporting records. Relationship tables are necessary because, unlike subrecord tables, supporting record tables do not include foreign keys which link them to the main record tables. + +Most relationship tables have the `_rlshp` suffix in their names. They typically contain just the primary keys for the tables that are being linked, though a few tables also include fields that are specific to the relationship between the two record types. 
+ +| Relationship/linking tables | Tables linked | +| ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `assessment_reviewer_rlshp` | `assessment` to `agent_person` | +| `assessment_rlshp` | `assessment` to `accession`, `archival_object`, `resource`, or `digital_object` | +| `classification_creator_rlshp` | `classification` to `agent_person`, `agent_family`, `agent_corporate_entity`, or `agent_software` | +| `classification_rlshp` | `classification` or `classification_term` to `resource` or `accession` | +| `classification_term_creator_rlshp` | `classification_term` to `agent_person`, `agent_family`, `agent_corporate_entity`, or `agent_software` | +| `event_link_rlshp` | `event` to `accession`, `resource`, `archival_object`, `digital_object`, `digital_object_component`, `agent_person`, `agent_family`, `agent_corporate_entity`, `agent_software`, or `top_container`. Also includes the `role_id` table, which can be joined with the `enumeration_value` table to return the event role (source, outcome, transfer, context) | +| `instance_do_link_rlshp` | `digital_object` to `instance` | +| `linked_agents_rlshp` | `agent_person`, `agent_software`, `agent_family`, or `agent_corporate_entity` to `accession`, `archival_object`, `digital_object`, `digital_object_component`, `event`, or `resource`. Also includes the `role_id` and `relator_id` tables, which can be joined with the `enumeration_value` table | +| `location_profile_rlshp` | `location` to `location_profile` | +| `owner_repo_rlshp` | `location` to `repository` | +| `related_accession_rlshp` | Links a row in the `accession` table to another row in the `accession` table. 
Also includes fields for `relator` and relationship type. | +| `related_agents_rlshp` | `agent_person`, `agent_corporate_entity`, `agent_software`, or `agent_family` to other agent tables, or two rows in the same agent table. Also includes fields for `relator` and `description`, and the type of relationship. | +| `spawned_rlshp` | `accession` to `resource`. This contains all linked accession data, even if the resource was not spawned from the accession record. | +| `subject_rlshp` | `subject` to `accession`, `archival_object`, `resource`, `digital_object`, or `digital_object_component` | +| `surveyed_by_rlshp` | `assessment` to `agent_person` | +| `top_container_housed_at_rlshp` | `top_container` to `location`. Also includes fields for `start_date`, `end_date`, `status`, and a free-text `note`. | +| `top_container_link_rlshp` | `top_container` to `sub_container` | +| `top_container_profile_rlshp` | `top_container` to `container_profile` | +| `subject_term` | `subject` to `term` | +| `linked_agent_term` | `linked_agents_rlshp` to `term` | + +<!-- is the assessment definition thing a linking table - it pretty much only has foreign keys + +Same question about one of the rights restriction tables - can't remember which one right now. + --> + +It is not always obvious which relationship tables will provide the desired results. 
For instance, to get a box list for a given resource record, enter the following query into a MySQL editor: + +```sql +SELECT DISTINCT CONCAT('/repositories/', resource.repo_id, '/resources/', resource.id) as resource_uri + , resource.identifier + , resource.title + , tc.barcode as barcode + , tc.indicator as box_number +FROM sub_container sc +JOIN top_container_link_rlshp tclr on tclr.sub_container_id = sc.id +JOIN top_container tc on tclr.top_container_id = tc.id +JOIN instance on sc.instance_id = instance.id +JOIN archival_object ao on instance.archival_object_id = ao.id +JOIN resource on ao.root_record_id = resource.id +#change to your desired resource id +WHERE resource.id = 4556 +``` + +Sometimes numerous relationship tables must be joined to retrieve the desired results. For instance, to get all boxes and folders for a given resource record, including any container profiles and locations, enter the following query into a MySQL editor: + +```sql +SELECT CONCAT('/repositories/', tc.repo_id, '/top_containers/', tc.id) as tc_uri + , CONCAT('/repositories/', resource.repo_id, '/resources/', resource.id) as resource_uri + , CONCAT('/repositories/', resource.repo_id) as repo_uri + , CONCAT('/repositories/', ao.repo_id, '/archival_objects/', ao.id) as ao_uri + , resource.identifier AS resource_identifier + , resource.title AS resource_title + , ao.display_string AS ao_title + , ev2.value AS level + , tc.barcode AS barcode + , cp.name AS container_profile + , tc.indicator AS container_num + , ev.value AS sc_type + , sc.indicator_2 AS sc_num +from sub_container sc +JOIN top_container_link_rlshp tclr on tclr.sub_container_id = sc.id +JOIN top_container tc on tclr.top_container_id = tc.id +LEFT JOIN top_container_profile_rlshp tcpr on tcpr.top_container_id = tc.id +LEFT JOIN container_profile cp on cp.id = tcpr.container_profile_id +LEFT JOIN top_container_housed_at_rlshp tchar on tchar.top_container_id = tc.id +JOIN instance on sc.instance_id = instance.id +JOIN 
archival_object ao on instance.archival_object_id = ao.id
+JOIN resource on ao.root_record_id = resource.id
+LEFT JOIN enumeration_value ev on ev.id = sc.type_2_id
+LEFT JOIN enumeration_value ev2 on ev2.id = ao.level_id
+#change to your desired resource id
+WHERE resource.id = 4223
+```
+
+ <!-- Mention the CONCAT function for creating URIs -->
+
+## Enumerations
+
+All controlled values used by the application - excluding tool-tips and frontend/public display values and the values that are stored in a few of the supporting record tables (see below) - are stored in a table called `enumeration_value`. Controlled values are organized into a variety of parent enumerations (akin to a set of distinct controlled value lists) which are utilized by different record and subrecord types. Parent enumeration data is stored in the `enumeration` table and is linked by foreign key in the `enumeration_id` field in the `enumeration_value` table. In the record and subrecord tables, enumeration values appear as foreign keys in a variety of foreign key columns, usually identified by an `_id` suffix.
+
+ArchivesSpace comes with a standard set of controlled values, but most of these are modifiable by end-users via the staff interface and API. However, some values in the `enumeration_value` table are read-only - these values define the terminology and data types used in different parts of the application (i.e. the various note types).
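
For example, to see every controlled value belonging to a single list, join `enumeration_value` to its parent `enumeration` by name (this sketch assumes the `name`, `readonly`, and `position` columns; `date_type` is one of the enumeration names):

```sql
-- Illustrative: list the controlled values in one value list
SELECT ev.value
     , ev.readonly
FROM enumeration_value ev
JOIN enumeration e on ev.enumeration_id = e.id
WHERE e.name = 'date_type'
ORDER BY ev.position
```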
+ +Enumeration IDs appear as foreign keys in a variety of database tables: + +| table_name | column_name | enumeration_name | +| -------------------------- | ---------------------------------- | -------------------------------------------------- | +| `accession` | `acquisition_type_id` | accession_acquisition_type | +| `accession` | `resource_type_id` | accession_resource_type | +| `agent_contact` | `salutation_id` | agent_contact_salutation | +| `archival_object` | `level_id` | archival_record_level | +| `collection_management` | `processing_priority_id` | collection_management_processing_priority | +| `collection_management` | `processing_status_id` | collection_management_processing_status | +| `collection_management` | `processing_total_extent_type_id` | extent_extent_type_id | +| `container_profile` | `dimension_units_id` | dimension_units | +| `date` | `calendar_id` | date_calendar | +| `date` | `certainty_id` | date_certainty | +| `date` | `date_type_id` | date_type | +| `date` | `era_id` | date_era | +| `date` | `label_id` | date_label | +| `deaccession` | `scope_id` | deaccession_scope | +| `digital_object` | `digital_oject_type_id` | digital_object_digital_object_type | +| `digital_object` | `level_id` | digital_object_level | +| `event` | `event_type_id` | event_event_type | +| `event` | `outcome_id` | event_outcome | +| `extent` | `extent_type_id` | extent_extent_type | +| `extent` | `portion_id` | extent_portion | +| `external_document` | `identifier_type_id` | rights_statement_external_document_identifier_type | +| `file_version` | `checksum_method_id` | file_version_checksum_methods | +| `file_version` | `file_format_name_id` | file_version_file_format_name | +| `file_version` | `use_statement_id` | file_version_use_statement | +| `file_version` | `xlink_actuate_attribute_id` | file_version_xlink_actuate_attribute | +| `file_version` | `xlink_show_attribute_id` | file_version_xlink_show_attribute | +| `instance` | `instance_type_id` | 
instance_instance_type | +| `language_and_script` | `language_id` | +| `language_and_script` | `script_id` | +| `location` | `temporary_id` | location_temporary | +| `location_function` | `location_function_type_id` | location_function_type | +| `location_profile` | `dimension_units_id` | dimension_units | +| `name_corporate_entity` | `rules_id` | name_rule | +| `name_corporate_entity` | `source_id` | name_source | +| `name_family` | `rules_id` | name_rule | +| `name_family` | `source_id` | name_source | +| `name_person` | `name_order_id` | name_person_name_order | +| `name_person` | `rules_id` | name_rule | +| `name_person` | `source_id` | name_source | +| `name_software` | `rules_id` | name_rule | +| `name_software` | `source_id` | name_source | +| `repository` | `country_id` | country_iso_3166 | +| `resource` | `finding_aid_description_rules_id` | resource_finding_aid_description_rules | +| `resource` | `finding_aid_language_id` | +| `resource` | `finding_aid_script_id` | +| `resource` | `finding_aid_status_id` | resource_finding_aid_status | +| `resource` | `level_id` | archival_record_level | +| `resource` | `resource_type_id` | resource_resource_type | +| `rights_restriction_type` | `restriction_type_id` | restriction_type | +| `rights_statement` | `jurisdiction_id` | +| `rights_statement` | `other_rights_basis_id` | rights_statement_other_rights_basis | +| `rights_statement` | `rights_type_id` | rights_statement_rights_type | +| `rights_statement` | `status_id` | +| `rights_statement_act` | `act_type_id` | rights_statement_act_type | +| `rights_statement_act` | `restriction_id` | rights_statement_act_restriction | +| `rights_statement_pre_088` | `ip_status_id` | rights_statement_ip_status | +| `rights_statement_pre_088` | `jurisdiction_id` | +| `rights_statement_pre_088` | `rights_type_id` | rights_statement_rights_type | +| `sub_container` | `type_2_id` | container_type | +| `sub_container` | `type_3_id` | container_type | +| `subject` | `source_id` | 
subject_source | +| `telephone` | `number_type_id` | telephone_number_type | +| `term` | `term_type_id` | subject_term_type | +| `top_container` | `type_id` | container_type | + +<!-- need to add some rlshp tables which have enums --> + +To translate the enumeration ID that appears in the record and subrecord tables, join the `enumeration_value` table. The table can be joined multiple times if there are multiple values to translate, but you must use an alias for each table. For example: + +```sql +SELECT CONCAT('/repositories/', ao.repo_id, '/archival_objects/', ao.id) as ao_uri + , ao.display_string as ao_title + , date.begin + , date.end + , ev.value as date_label + , ev2.value as date_type + , ev3.value as date_calendar +FROM archival_object ao +LEFT JOIN date on date.archival_object_id = ao.id +LEFT JOIN enumeration_value ev on ev.id = date.label_id +LEFT JOIN enumeration_value ev2 on ev2.id = date.date_type_id +LEFT JOIN enumeration_value ev3 on ev3.id = date.calendar_id +``` + +**NOTE**: `container_profile`, `location_profile`, and `assessment_attribute_definition` records are similar to the records in the `enumeration_value` table in that they store controlled values which are referenced by other parts of the system. However, they differ in that they have their own tables and are addressable via their own URIs. + +## User, setting, and permission tables + +These tables store user and permissions information, user/repository/global preferences, and RDE and custom report templates. 
+ +| Table name | Description | +| ------------------------ | ------------------------------------------------------- | +| `custom_report_template` | Custom report templates | +| `default_values` | Default values settings | +| `group` | Data about permission groups created by each repository | +| `group_permission` | Links the permission table to the group table | +| `group_user` | Links the group table to the user table | +| `oai_config` | Configuration data for OAI-PMH harvesting | +| `permission` | All permission types that can be assigned to users | +| `preference` | User preference data | +| `rde_template` | RDE templates | +| `required_fields` | Contains repository-defined required fields | +| `user` | User data | + +## Job tables + +These tables store data related to background jobs, including imports. + +| Table name | Description | +| --------------------- | ---------------------------------------------------------- | +| `job` | All jobs which have been run in an ArchivesSpace instance. | +| `job_created_record` | Records created via background jobs | +| `job_input_file` | Data about input files used in background jobs | +| `job_modified_record` | Data about records modified via background jobs | + +## System tables + +These tables track actions taken against the database (i.e. edits and deletes), system events, session and authorization data, and database information. These tables are typically not referenced by any other table. + +| Table name | Description | +| ----------------- | --------------------------------------------------------------------------------------------------- | +| `active_edit` | Records being actively edited by a user. Read-only system table | +| `auth_db` | Authentication data for users. Read-only system table | +| `deleted_records` | Records deleted in the past 24 hours. Read-only system table | +| `notification` | Notifications stream. Read-only system table | +| `schema_info` | Contains the database schema version. 
Read-only system table. | +| `sequence` | The value corresponds to one less than the number of children the archival object has. Read-only system table | +| `session` | Recent session data. Read-only system table | +| `system_event` | System event data. Read-only system table | + +<!-- these are subrecords --> +<!-- | subnote_metadata | +| rights_statement_pre_088 | --> + +## Parent-Child Relationships and Sequencing + +### Repository-scoped records + +Many main and supporting records are scoped to a particular repository. In these tables the parent repository is identified by a foreign key which corresponds to the database identifier in the `repository` table: + +| Column name | Description | Example | Found in | +| --- | --- | --- | --- | +| `repo_id` | The database ID of the parent repository | `12` | `accession`, `archival_object`, `assessment`, `assessment_attribute_definition`, `classification`, `classification_term`, `custom_report_template`, `default_values`, `digital_object`, `digital_object_component`, `event`, `group`, `job`, `preference`, `required_fields`, `resource`, `rights_statement`, `top_container` | + +### Parent/child relationships + +Hierarchical relationships between other records are also expressed through foreign keys: + +| Column name | Description | Example | PK Tables | Found in | +| --- | --- | --- | 
--- | --- | +| `root_record_id` | The database ID of the root parent record | `4566` | `resource`, `digital_object`, `classification` | `archival_object`, `digital_object_component`, `classification_term` | +| `parent_id` | The database ID of the immediate parent record. This is used to identify parent records which are of the same type as the child record (e.g. two archival object records). The value will be NULL if the only parent is the root record. | `1748121` | `archival_object`, `classification_term`, `digital_object_component` | `archival_object`, `classification_term`, `digital_object_component`, `note_persistent_id` | +| `parent_name` | The database ID or URI, and the record type, of the immediate parent | `144@archival_object`, `root@/repositories/2/resources/2` | `resource`, `archival_object`, `classification`, `classification_term`, `digital_object`, `digital_object_component` | `archival_object`, `classification_term`, `digital_object_component` | + +Beginning with MySQL 8, you can recursively retrieve all parents of an archival object (or all archival objects linked to a resource) by running the following query: + +```sql +WITH RECURSIVE ao_path AS + (SELECT ao1.id + , ao1.display_string + , ao1.component_id + , ao1.parent_id + , ev.value as `ao_level` + , 1 as level + FROM archival_object ao1 + LEFT JOIN enumeration_value ev on ev.id = ao1.level_id + WHERE ao1.id = <your ao id> + -- to get all trees for a resource change to: WHERE ao1.root_record_id = <your root_record_id> + UNION ALL + SELECT ao2.id + , ao2.display_string + , ao2.component_id + , ao2.parent_id + , ev.value as `ao_level` + , ao_path.level + 1 as level + FROM ao_path + JOIN archival_object ao2 on ao_path.parent_id = ao2.id + LEFT JOIN enumeration_value ev on ev.id = ao2.level_id) + 
SELECT GROUP_CONCAT(CONCAT(display_string, ' (', CONCAT(UPPER(SUBSTRING(ao_level,1,1)),LOWER(SUBSTRING(ao_level,2))), ' ', IF(component_id is not NULL, CAST(component_id as CHAR), "N/A"), ')') ORDER BY level DESC SEPARATOR ' > ') as tree + FROM ao_path; + +``` + +<!-- need to add: queries to retrieve all children, and both parents and children (MySQL 8+) --> + +To retrieve all parents of a record in MySQL 5.7 and below, run the following query: + +```sql +SELECT (SELECT GROUP_CONCAT(CONCAT(display_string, ' (', ao_level, ')') SEPARATOR ' < ') as parent_path + FROM (SELECT T2.display_string as display_string + , ev.value as ao_level + FROM (SELECT @r AS _id + , @p := @r AS previous + , (SELECT @r := parent_id FROM archival_object WHERE id = _id) AS parent_id + , @l := @l + 1 AS lvl + FROM ((SELECT @r := 1749840, @p := 0, @l := 0) AS vars, + archival_object h) + WHERE @r <> 0 AND @r <> @p) AS T1 + JOIN archival_object T2 ON T1._id = T2.id + LEFT JOIN enumeration_value ev on ev.id = T2.level_id + WHERE T2.id != 1749840 + ORDER BY T1.lvl DESC) as all_parents) as p_path + , ao.display_string + , CONCAT('/repositories/', ao.repo_id, '/archival_objects/', ao.id) as uri +FROM archival_object ao +WHERE ao.id = 1749840 +``` + +To retrieve the immediate children of a record (MySQL 5.7 and below), query the `parent_id` column directly; retrieving a full subtree without recursive CTEs requires repeating this query for each level of the hierarchy: + +```sql +SELECT ao.id + , ao.display_string + , ev.value as ao_level +FROM archival_object ao +LEFT JOIN enumeration_value ev on ev.id = ao.level_id +WHERE ao.parent_id = 1749840 +ORDER BY ao.position +``` + +### Sequencing + +The ordering of records in a `resource`, `classification`, or `digital_object` tree is determined by the `position` field. 
The position field is also used to order values in the `enumeration_value` and `assessment_attribute_definition` tables: + +| Column name | Description | Example | Found in | +| --- | --- | --- | --- | +| `position` | The position of the record or value relative to its siblings under the same parent | `168000` | `enumeration_value`, `assessment_attribute_definition`, `classification_term`, `digital_object_component`, `archival_object` | + +## Boolean fields + +Many records and subrecords include fields which contain integers (`0` or `1`) corresponding to boolean values. + +| Boolean fields | Description | Found in | +| --- | --- | --- | +| `publish` | Whether the record is published (visible in the public interface) | `subnote_metadata`, `file_version`, `external_document`, `accession`, `classification`, `agent_person`, `agent_family`, `agent_software`, `agent_corporate_entity`, `classification_term`, `revision_statement`, `repository`, `note`, `digital_object`, `digital_object_component`, `archival_object`, `resource` | +| `suppressed` | Whether the record is suppressed (hidden from users who lack permission to view suppressed records) | `accession`, `archival_object`, `assessment_reviewer_rlshp`, `assessment_rlshp`, `classification`, `classification_creator_rlshp`, `classification_rlshp`, `classification_term`, `classification_term_creator_rlshp`, `digital_object`, `digital_object_component`, `enumeration_value`, `event`, `event_link_rlshp`, `instance_do_link_rlshp`, `linked_agents_rlshp`, `location_profile_rlshp`, `owner_repo_rlshp`, `related_accession_rlshp`, `related_agents_rlshp`, `resource`, `spawned_rlshp`, `surveyed_by_rlshp`, `top_container_housed_at_rlshp`, `top_container_link_rlshp`, `top_container_profile_rlshp` | +| `restrictions_apply` | Whether access restrictions apply to the record | `accession`, `archival_object` | + +<!-- NEED TO ADD the restriction field here - the resource and dig ob recs have it --> +<!-- also add the hidden field in repo and the multiple restrictions in accession --> +<!-- I think this is good to mention because these are editable via the API but also have their own endpoints. So they are a little different. Should also mention that they are bools in the API docs. --> + +## Read-Only Fields + +Several system-generated, read-only fields appear across many tables. These include database identifiers, timestamps that track record creation and modification, and fields that record the username of the user that created and last modified each record. + +| Most common read-only fields | Description | +| --- | --- | +| `id` (primary key) | Database identifier for each record | +| `system_mtime` | The last time the record was modified by the system | +| `created_by` | The user that created a record | +| `last_modified_by` | The user that last modified a record | +| `user_mtime` | The time that a record was last modified by a user | +| `create_time` | The time that a record was created | +| `lock_version` | This field is incrementally updated each time a record is updated. This provides a method of tracking updates and managing near-simultaneous edits by different users. 
| +| `json_schema_version` | The JSON schema version | +| `aspace_relationship_position` | The position of a linked record in a list of other linked records | +| `is_slug_auto` | A boolean value that indicates whether a slug was auto-generated | +| `system_generated` | A boolean value that indicates whether a field was system-generated | +| `display_string` | A system-generated field which concatenates the title and date fields of an archival object record | + +**NOTE**: for subrecord tables these fields may hold unexpected data - because subrecords are deleted and recreated upon each save of a main or supporting record, their create and modification times will also be recreated and will not reflect the original creation date of the subrecord itself. For resource records, the timestamp only records the time that the resource itself was modified, not the last time any of its components were modified. + +<!-- ## Querying the ArchivesSpace Database --> diff --git a/src/content/docs/de/architecture/directories.md b/src/content/docs/de/architecture/directories.md new file mode 100644 index 0000000..8d1c026 --- /dev/null +++ b/src/content/docs/de/architecture/directories.md @@ -0,0 +1,90 @@ +--- +title: Directory structure +description: Provides short summaries of the different directories in the ArchivesSpace codebase. +--- + +ArchivesSpace is made up of several components that are kept in separate directories. + +## \_yard + +This directory contains the code for the documentation tool used to generate the github io pages here: http://archivesspace.github.io/archivesspace/ + +## backend + +This directory contains the code that handles the database and the API. + +## build + +This directory contains the code used to build the application. It includes the commands that are used to run the development servers, the test suites, and to build the releases. ArchivesSpace is a JRuby application and Apache Ant is used to build it. 
+ +## clustering + +This directory contains code that can be used when clustering an ArchivesSpace installation. + +## common + +This directory contains code that is used across two or more of the components. It includes configuration options, database schemas and migrations, and translation files. + +## contribution_files + +This directory contains the documentation and PDFs of the license agreement files. + +## docs + +This directory contains documentation files that are included in a release. + +## frontend + +This directory contains the staff interface Ruby on Rails application. + +## indexer + +This directory contains the indexer Sinatra application. + +## jmeter + +This directory contains an example that can be used to set up Apache JMeter to load test functional behavior and measure performance. + +## launcher + +This directory contains the code that launches (starts, restarts, and stops) an ArchivesSpace application. + +## oai + +This directory contains the OAI-PMH Sinatra application. + +## plugins + +This directory contains ArchivesSpace Program Team supported plugins. + +## proxy + +This directory contains the Docker proxy code. + +## public + +This directory contains the public interface Ruby on Rails application. + +## reports + +This directory contains the reports code. + +## scripts + +This directory contains scripts necessary for building, deploying, and other ArchivesSpace tasks. + +## selenium + +This directory contains the selenium tests. + +## solr + +This directory contains the solr code. + +## stylesheets + +This directory contains XSL stylesheets used by ArchivesSpace. + +## supervisord + +This directory contains a tool that can be used to run the development servers. 
diff --git a/src/content/docs/de/architecture/frontend.md b/src/content/docs/de/architecture/frontend.md new file mode 100644 index 0000000..50e9665 --- /dev/null +++ b/src/content/docs/de/architecture/frontend.md @@ -0,0 +1,7 @@ +--- +title: Staff interface +--- + +This document provides an overview of the parts of the ArchivesSpace codebase which control the frontend/staff interface. For guidance on using the ArchivesSpace staff interface, consult the [ArchivesSpace Help Center](https://archivesspace.atlassian.net/wiki/spaces/ArchivesSpaceUserManual/overview) (ArchivesSpace members only). + +> Additional documentation needed diff --git a/src/content/docs/de/architecture/index.md b/src/content/docs/de/architecture/index.md new file mode 100644 index 0000000..786335d --- /dev/null +++ b/src/content/docs/de/architecture/index.md @@ -0,0 +1,25 @@ +--- +title: Architecture and components +description: Abbreviated description of how the different parts of ArchivesSpace interact with each other with links to each section. +--- + +ArchivesSpace is divided into several components: the backend, which +exposes the major workflows and data types of the system via a +REST API, a staff interface, a public interface, and a search system, +consisting of Solr and an indexer application. + +These components interact by exchanging JSON data. The format of this +data is defined by a class called JSONModel. 
+ +- [Overview](./overview) +- [JSONModel -- a validated ArchivesSpace record](./jsonmodel) +- [The ArchivesSpace backend](./backend) +- [The ArchivesSpace staff interface](./frontend) +- [Background Jobs](./jobs) +- [Search indexing](./search) +- [The ArchivesSpace public user interface](./public) +- [OAI-PMH interface](./oai-pmh) +- [API](./api) +- [Database](./database) +- [Directory structure](./directories) +- [Dependencies](./languages) diff --git a/src/content/docs/de/architecture/jobs.md b/src/content/docs/de/architecture/jobs.md new file mode 100644 index 0000000..5e2ef01 --- /dev/null +++ b/src/content/docs/de/architecture/jobs.md @@ -0,0 +1,118 @@ +--- +title: Background jobs +description: Describes long running processes, called background jobs, in ArchivesSpace, as well as how they are structured using types, runners, and schemas. Additional guidance on setting jobs to run concurrently and how to add a new job type using a plugin. +--- + +ArchivesSpace provides a mechanism for long-running processes to run +asynchronously. These processes are called `Background Jobs`. + +## Managing Jobs in the Staff UI + +The `Create` menu has a `Background Job` option which shows a submenu of job +types that the current user has permission to create. (See below for more +information about job permissions and hidden jobs.) Selecting one of these +options will take the user to a form to enter any parameters required for the +job and then to create it. + +When a job is created it is placed in the `Background Job Queue`. Jobs in the +queue will be run in the order they were created. (See below for more +information about multiple threads and concurrent jobs.) + +The `Browse` menu has a `Background Jobs` option. This takes the user to a list +of jobs arranged by their status. The user can then view the details of a job, +and cancel it if they have permission. + +## Permissions + +A user must have the `create_job` permission to create a job. 
By default, this +permission is included in the `repository_basic_data_entry` group. + +A user must have the `cancel_job` permission to cancel a job. By default, this +permission is included in the `repository_managers` group. + +When a JobRunner registers it can specify additional create and cancel +permissions. (See below for more information.) + +## Types, Runners and Schemas + +Each job has a type, and each type has a registered runner to run jobs of that +type and a JSONModel schema to define its parameters. + +### Registered JobRunners + +All jobs of a type are handled by a registered `JobRunner`. The job runner +classes are located here: + +``` +backend/app/lib/job_runners/ +``` + +It is possible to define additional job runners from a plugin. (See below for +more information about plugins.) + +A job runner class must subclass `JobRunner`, register to run one or more job +types, and implement a `#run` method for jobs that it handles. + +When a job runner registers for a job type, it can set some options: + +- `:hidden` + - Defaults to `false` + - If this is set then this job type will not be shown in the list of available job types. +- `:run_concurrently` + - Defaults to `false` + - If this is set to true then more than one job of this type can run at the same time. +- `:create_permissions` + - Defaults to `[]` + - A permission or list of permissions required, in addition to `create_job`, to create jobs of this type. +- `:cancel_permissions` + - Defaults to `[]` + - A permission or list of permissions required, in addition to `cancel_job`, to cancel jobs of this type. + +For more information about defining a job runner, see the `JobRunner` superclass: + +``` +backend/app/lib/job_runner.rb +``` + +### JSONModel Schemas + +A job type also requires a JSONModel schema that defines the parameters to run a +job of the type. The schema name must be the same as the type that the runner +registers for. 
For example: + +``` +common/schemas/import_job.rb +``` + +This schema, for `JSONModel(:import_job)`, defines the parameters for running a +job of type `import_job`. + +## Concurrency + +ArchivesSpace can be configured to run more than one background job at a time. +By default, there will be two threads available to run background jobs. +The configuration looks like this: + +``` +AppConfig[:job_thread_count] = 2 +``` + +The `BackgroundJobQueue` will start this number of threads at start up. Those +threads will then poll for queued jobs and run them. + +When a job runner registers, it can set an option called `:run_concurrently`. +This is `false` by default. When set to `false` a job thread will not run a job +if there is already a job of that type running. The job will remain on the queue +and will be run when there are no longer any jobs of its type running. + +When set to `true` a job will be run when it comes to the front of the queue +regardless of whether there is a job of the same type running. + +## Plugins + +It is possible to add a new job type from a plugin. ArchivesSpace includes a +plugin that demonstrates how to do this: + +``` +plugins/jobs_example +``` diff --git a/src/content/docs/de/architecture/jsonmodel.md b/src/content/docs/de/architecture/jsonmodel.md new file mode 100644 index 0000000..9002c8b --- /dev/null +++ b/src/content/docs/de/architecture/jsonmodel.md @@ -0,0 +1,103 @@ +--- +title: JSONModel +description: Describes the rules and structure behind the JSONModel class, which expresses the rules for different types of archival records. JSONModel instances are the primary data interchange mechanism for ArchivesSpace. +--- + +The ArchivesSpace system is concerned with managing a number of +different archival record types. 
Each record can be expressed as a +set of nested key/value pairs, and associated with each record type is +a number of rules that describe what it means for a record of that +type to be valid: + +- some fields are mandatory, some optional +- some fields can only take certain types +- some fields can only take values from a constrained set +- some fields are dependent on other fields +- some record types can be nested within other record types +- some record types may be related to others through a hierarchy +- some record types form a relationship graph with other record + types + +The JSONModel class provides a common language for expressing these +rules that all parts of the application can share. There is a +JSONModel class instance for each type of record in the system, so: + +```ruby +JSONModel(:digital_object) +``` + +is a class that knows how to take a hash of properties and make sure +those properties conform to the specification of a Digital Object: + +```ruby +JSONModel(:digital_object).from_hash(myhash) +``` + +If it passes validation, a new JSONModel(:digital_object) instance is +returned, which provides accessors for accessing its values, and +facilities for round-tripping between JSON documents and regular Ruby +hashes: + +```ruby +obj = JSONModel(:digital_object).from_hash(myhash) + +obj.title # or obj['title'] +obj.title = 'a new title' # or obj['title'] = 'a new title' + +obj._exceptions # Validates the object and reports any issues + +obj.to_hash # Turn the JSONModel object back into a regular hash +obj.to_json # Serialize the JSONModel object into JSON +``` + +Much of the validation performed by JSONModel is provided by the JSON +schema definitions listed in the `common/schemas` directory. JSON +schemas provide a standard way of declaring which properties a record +may and may not contain, along with their types and other +restrictions. 
ArchivesSpace uses these schemas to capture the +validation rules defining each record type in a declarative and +relatively self-documenting fashion. + +JSONModel instances are the primary data interchange mechanism for the +ArchivesSpace system: the API consumes and produces JSONModel +instances (in JSON format), and much of the user interface's life is +spent turning forms into JSONModel instances and shipping them off to +the backend. + +## JSONModel::Client -- A high-level API for interacting with the ArchivesSpace backend + +To save the need for a lot of HTTP request wrangling, ArchivesSpace +ships with a module called JSONModel::Client that simplifies the +common CRUD-style operations. Including this module just requires +passing an additional parameter when initializing JSONModel: + +```ruby +JSONModel::init(:client_mode => true, :url => @backend_url) +include JSONModel +``` + +If you'll be working against a single repository, it's convenient to +set it as the default for subsequent actions: + +```ruby +JSONModel.set_repository(123) +``` + +Then, several additional JSONModel methods are available: + +```ruby +# As before, get a paginated list of accessions (GET) +JSONModel(:accession).all(:page => 1) + +# Create a new accession (POST) +obj = JSONModel(:accession).from_hash(:title => "A new accession", ...) +obj.save + +# Get a single accession by ID (GET) +obj = JSONModel(:accession).find(123) + +# Update an existing accession (POST) +obj = JSONModel(:accession).find(123) +obj.title = "Updated title" +obj.save +``` diff --git a/src/content/docs/de/architecture/languages.md b/src/content/docs/de/architecture/languages.md new file mode 100644 index 0000000..e36d138 --- /dev/null +++ b/src/content/docs/de/architecture/languages.md @@ -0,0 +1,18 @@ +--- +title: Dependencies +description: Lists the technical stack of the application, including programming languages and platforms. 
+--- + +ArchivesSpace components are constructed using several programming languages, platforms, and additional open source projects. + +## Languages + +The languages used are Java, JRuby, Ruby, JavaScript, and CSS. + +## Platforms + +The backend, OAI harvester, and indexer are Sinatra apps. The staff and public user interfaces are Ruby on Rails apps. + +## Additional open source projects + +The database used out of the box and for testing is Apache Derby. The database suggested for production is MySQL. The index platform is Apache Solr. diff --git a/src/content/docs/de/architecture/oai-pmh.md b/src/content/docs/de/architecture/oai-pmh.md new file mode 100644 index 0000000..b538aa3 --- /dev/null +++ b/src/content/docs/de/architecture/oai-pmh.md @@ -0,0 +1,130 @@ +--- +title: OAI-PMH interface +description: Describes how OAI-PMH is set up in ArchivesSpace and how to harvest data using OAI-PMH with example links and additional information. +--- + +A starter OAI-PMH interface for ArchivesSpace allowing other systems to harvest +your records is included in version 2.1.0. Additional features and functionality +will be added in later releases. + +By default, the OAI-PMH interface runs on port 8082. A sample request page is +available at http://localhost:8082/sample. (To access it, make sure that you +have set the AppConfig[:oai_proxy_url] appropriately.) + +The system provides responses to a number of standard OAI-PMH requests, +including GetRecord, Identify, ListIdentifiers, ListMetadataFormats, +ListRecords, and ListSets. Unpublished and suppressed records and elements are +not included in any of the OAI-PMH responses. + +Some responses require the URL parameter metadataPrefix. 
There are five +different metadata responses available: + +- EAD -- oai_ead (resources in EAD) +- Dublin Core -- oai_dc (archival objects and resources in Dublin Core) +- extended DCMI Terms -- oai_dcterms (archival objects and resources in DCMI Metadata Terms format) +- MARC -- oai_marc (archival objects and resources in MARC) +- MODS -- oai_mods (archival objects and resources in MODS) + +The EAD response for resources and MARC response for resources and archival +objects use the mappings from the built-in exporter for resources. The DC, +DCMI terms, and MODS responses for resources and archival objects use mappings +suggested by the community. + +Here are some example URLs and other information for these requests: + +**GetRecord** – needs a record identifier and metadataPrefix +Up to ArchivesSpace v3.5.1 OAI identifiers are in this format: + +`http://localhost:8082/oai?verb=GetRecord&identifier=oai:archivesspace//repositories/2/resources/138&metadataPrefix=oai_ead` + +Starting with ArchivesSpace v4.0.0 OAI identifiers are in the new format (notice the colon after the `oai:archivesspace` namespace part of the identifier): + +`http://localhost:8082/oai?verb=GetRecord&identifier=oai:archivesspace:/repositories/2/resources/138&metadataPrefix=oai_ead` + +see also: https://github.com/code4lib/ruby-oai/releases/tag/v1.0.0 + +**Identify** + +`http://localhost:8082/oai?verb=Identify` + +**ListIdentifiers** – needs a metadataPrefix + +`http://localhost:8082/oai?verb=ListIdentifiers&metadataPrefix=oai_dc` + +**ListMetadataFormats** + +`http://localhost:8082/oai?verb=ListMetadataFormats` + +**ListRecords** – needs a metadataPrefix + +`http://localhost:8082/oai?verb=ListRecords&metadataPrefix=oai_dcterms` + +**ListSets** + +`http://localhost:8082/oai?verb=ListSets` + +Harvesting the ArchivesSpace OAI-PMH server without specifying a set will yield +all published records across all repositories. +Predefined sets can be accessed using the set parameter. 
In order to retrieve +records from sets, include a set parameter in the URL and the DC metadataPrefix, +such as "&set=collection&metadataPrefix=oai_dc". These sets can be from +configured sets as shown above or from the following levels of description: + +- Class -- class +- Collection -- collection +- File -- file +- Fonds -- fonds +- Item -- item +- Other_Level -- otherlevel +- Record_Group -- recordgrp +- Series -- series +- Sub-Fonds -- subfonds +- Sub-Group -- subgrp +- Sub-Series -- subseries + +In addition to the sets based on level of description, you can define sets +based on repository codes and/or sponsors in the config/config.rb file: + +```ruby +AppConfig[:oai_sets] = { + 'repository_set' => { + :repo_codes => ['hello626'], + :description => "A set of one or more repositories", + }, + 'sponsor_set' => { + :sponsors => ['The_Sponsor'], + :description => "A set of one or more sponsors", + } +} +``` + +The interface implements resumption tokens for pagination of results. As an +example, the following URL format should be used to page through the results +from a ListRecords request: + +`http://localhost:8082/oai?verb=ListRecords&metadataPrefix=oai_ead` + +using the resumption token: + +`http://localhost:8082/oai?verb=ListRecords&resumptionToken=eyJtZXRhZGF0YV9wcmVmaXgiOiJvYWlfZWFkIiwiZnJvbSI6IjE5NzAtMDEtMDEgMDA6MDA6MDAgVVRDIiwidW50aWwiOiIyMDE3LTA3LTA2IDE3OjEwOjQxIFVUQyIsInN0YXRlIjoicHJvZHVjaW5nX3JlY29yZHMiLCJsYXN0X2RlbGV0ZV9pZCI6MCwicmVtYWluaW5nX3R5cGVzIjp7IlJlc291cmNlIjoxfSwiaXNzdWVfdGltZSI6MTQ5OTM2MTA0Mjc0OX0=` + +Note: you do not use the metadataPrefix when you use the resumptionToken + +The ArchivesSpace OAI-PMH server supports persistent deletes, so harvesters +will be notified of any records that were deleted since +they last harvested. 
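The resumption-token flow described above can be sketched as a tiny harvester helper. This is a hedged illustration, not ArchivesSpace code: the base URL assumes the default OAI port, and the token is pulled out with a simple regex rather than a real XML parser (a production harvester should use a proper OAI client library).

```ruby
require 'cgi'

# Assumed default OAI-PMH endpoint (see the port note above).
BASE = 'http://localhost:8082/oai'

# Build a ListRecords URL. Per the protocol, the first request carries
# metadataPrefix; follow-up requests carry ONLY the resumptionToken.
def list_records_url(metadata_prefix: nil, resumption_token: nil)
  if resumption_token
    "#{BASE}?verb=ListRecords&resumptionToken=#{CGI.escape(resumption_token)}"
  else
    "#{BASE}?verb=ListRecords&metadataPrefix=#{metadata_prefix}"
  end
end

# Extract the resumptionToken (if any) from a response body;
# returns nil when the last page has been reached.
def next_token(xml)
  m = xml.match(%r{<resumptionToken[^>]*>([^<]+)</resumptionToken>})
  m && m[1]
end
```

A harvester would call `list_records_url(metadata_prefix: 'oai_ead')` once, then loop with `list_records_url(resumption_token: next_token(body))` until `next_token` returns `nil`.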
+ +Mixed content is removed from Dublin Core, dcterms, MARC, and MODS field outputs +in the OAI-PMH response (e.g., a scope note mapped to a DC description field +would not include `<p>`, `<abbr>`, `<address>`, `<archref>`, `<bibref>`, `<blockquote>`, +`<chronlist>`, `<corpname>`, `<date>`, `<emph>`, `<expan>`, `<extptr>`, `<extref>`, +`<famname>`, `<function>`, `<genreform>`, `<geogname>`, `<lb>`, `<linkgrp>`, `<list>`, +`<name>`, `<note>`, `<num>`, `<occupation>`, `<origination>`, `<persname>`, `<ptr>`, `<ref>`, `<repository>`, `<subject>`, `<table>`, `<title>`, `<unitdate>`, `<unittitle>`). + +The component level records include inherited data from superior hierarchical +levels of the finding aid. Element inheritance is determined by institutional +system configuration (editable in the config/config.rb file) as implemented for +the Public User Interface. + +ARKs have not yet been implemented, pending more discussion of how they should +be formulated. diff --git a/src/content/docs/de/architecture/overview.md b/src/content/docs/de/architecture/overview.md new file mode 100644 index 0000000..b4a7375 --- /dev/null +++ b/src/content/docs/de/architecture/overview.md @@ -0,0 +1,15 @@ +--- +title: Architecture Overview +description: The main components of ArchivesSpace and how they interact with each other and the end users. +--- + +ArchivesSpace is divided into several components: + +- the backend, which exposes the major workflows and data types of the system via a REST API, +- a staff interface, +- a public interface, +- a search system, consisting of Solr and an indexer application. + +These components interact by exchanging JSON data. The format of this data is defined by a class called JSONModel. 
+
+![archivesspace_architecture](./archivesspace_architecture.svg)
diff --git a/src/content/docs/de/architecture/public.md b/src/content/docs/de/architecture/public.md
new file mode 100644
index 0000000..aa6419d
--- /dev/null
+++ b/src/content/docs/de/architecture/public.md
@@ -0,0 +1,154 @@
+---
+title: Public user interface
+description: Directions for configuration options for the ArchivesSpace Public User Interface, as well as an explanation of inheritance of data in records.
+---
+
+The ArchivesSpace Public User Interface (PUI) provides a public
+interface to your ArchivesSpace collections. In a default
+ArchivesSpace installation it runs on port `:8081`.
+
+## Configuration
+
+The PUI is configured using the standard ArchivesSpace `config.rb`
+file, with the relevant configuration options prefixed with
+`:pui_`.
+
+To see the full list of available options, see the file
+[`https://github.com/archivesspace/archivesspace/blob/master/common/config/config-defaults.rb`](https://github.com/archivesspace/archivesspace/blob/master/common/config/config-defaults.rb)
+
+### Preserving Patron Privacy
+
+The `:block_referrer` key in the configuration file (default: `true`) determines whether the full referring URL is
+transmitted when the user clicks a link to a website outside the web domain of your instance of ArchivesSpace. This
+protects your patrons from tracking by that site.
+
+### Main Navigation Menu
+
+You can choose not to display one or more of the links on the main
+(horizontal) navigation menu, either globally or by repository, if you
+have more than one repository. You manage this through the
+`:pui_hide` options in the `config/config.rb` file.
+
+### Repository Customization
+
+#### Display of "badges" on the Repository page
+
+You can configure which badges appear on the Repository page, either
+globally or by repository. See the `:pui_hide` configuration options
+for these too.
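As an illustration, a `:pui_hide` override in `config/config.rb` might look like the following sketch. The key names used here are examples only; the authoritative list of keys lives in `config-defaults.rb`, linked above.

```ruby
# Sketch only -- check config-defaults.rb for the authoritative key names.
# Hide the "Subjects" link in the main navigation menu globally:
AppConfig[:pui_hide][:subjects] = true

# Hide the search tips link as well:
AppConfig[:pui_hide][:search_tips] = true
```

Restart ArchivesSpace after changing these values for them to take effect.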
+
+### Activation of the "Request" button on archival object pages
+
+You can configure, either globally or by repository, whether the
+"Request" button is active on archival object pages for objects that
+don't have an associated Top Container. See the
+`:pui_requests_permitted_for_containers_only` configuration option to
+modify this.
+
+### I18n
+
+You can change the text and labels used by the PUI by editing the
+locale files under the `locales/public` directory of your
+ArchivesSpace distribution.
+
+### Addition of a "lead paragraph"
+
+You can also use the custom `.yml` files, described above, to add a
+custom "lead paragraph" (including HTML markup) for one or more of
+your repositories, keyed to the repository's code.
+
+For example, if your repository, `My Wonderful Repository`, has a code of `MWR`, this is what you might see in the
+custom `en.yml`:
+
+```yaml
+en:
+  repos:
+    mwr:
+      lead_graph: This <strong>amazing</strong> repository has so much to offer you!
+```
+
+## Development
+
+To run a development server, the PUI follows the same pattern as the rest of ArchivesSpace.
From your ArchivesSpace checkout: + +```shell + # Prepare all dependencies + build/run bootstrap + + # Run the backend development server (and Solr) + build/run backend:devserver + + # Run the indexer + build/run indexer + + # Finally, run the PUI itself + build/run public:devserver +``` + +## Inheritance + +### Three options for inheritance: + +- Directly inherit a value for a field – the record has no value for the field and you want the value in the field to display as if it were the record’s own [uncomment the inheritance section in the config, set desired field (property) to inherit_directly => true] +- Indirectly inherit a value for a field – the record has no value for the field and you want to display the value from a higher level, but precede it with a note that indicates that it comes from that higher level, such as "From the collection" [uncomment the inheritance section in the config, set desired field (property) to inherit_directly => false] +- Don’t display the field at all – the record has no value of its own for the field and you don’t want it to display at all [uncomment the inheritance section in the config, delete the lines for the desired field (property)] + +### Archival Inheritance + +With the new version of the Public Interface, all elements of description can be inherited. This is especially important since the PUI displays each level of description as its own webpage. + +Each element of description can be inherited either directly or indirectly. When an element is inherited directly, it will appear as if that element was attached directly to that archival object in the staff interface. When an element is inherited indirectly, it will appear on the lower-level of description in the public interface, but the inherited element will be preceded with a note indicating the level of the ancestor from which the note is inherited (e.g. From the Collection, or From the Sub-Series). 
In both cases, the element will only be inherited if it is missing from the archival object. Additionally, the element of description will only be inherited from the closest ancestor. In other words, if a top-level collection record has an access restrictions note, and a child-level series record has an access restrictions note, but the sub-series child of that series record lacks an access restrictions note, then the sub-series record will inherit only the access restrictions note from its parent series record.
+
+Additionally, the identifier element in ArchivesSpace, which is better known as the Reference Code in ISAD(G) and DACS, can be inherited in a composite manner. When inherited in a composite manner, the inherited elements will be concatenated together. In other words, an identifier at the item level could look like this: MSS 1. Series A. Item 1. Though the archival object has an identifier of "Item 1", a composite identifier is displayed since the series-level record to which the item belongs has an identifier of "Series A", which in turn also belongs to a collection-level record that has an identifier of "MSS 1".
+
+By default, the following elements are turned on for inheritance:
+
+- Title (direct inheritance)
+- Identifier (indirect inheritance), but by default the identifier inherits from ancestor archival objects only; it does NOT include the resource identifier.
+
+Also, it is advised to inherit this element in a composite fashion once it is determined whether the level of description should or should not display as part of the identifier, which will depend upon local data-entry practices.
+
+- Language code (direct inheritance, but it does NOT display anywhere in the interface currently; eventually, this could be used for faceting)
+- Dates (direct inheritance)
+- Extents (indirect inheritance)
+- Creator (indirect inheritance)
+- Access restrictions note (direct inheritance)
+- Scope and contents note (indirect inheritance)
+- Language of Materials note (indirect inheritance, but there seems to be a bug right now so that the Language notes always show up as being directly inherited. See AR-XXXX)
+
+See https://github.com/archivesspace/archivesspace/blob/master/common/config/config-defaults.rb#L296-L396 for more information and examples.
+
+Also, a video overview of this feature, which was recorded before development was finished, is available online:
+https://vimeo.com/195457286
+
+### Composite Identifier Inheritance
+
+If you add the following configuration to your configuration file, restart ArchivesSpace, and then let the indexer re-index your records, you can gain the benefit of composite identifiers:
+
+```ruby
+AppConfig[:record_inheritance][:archival_object][:composite_identifiers] = {
+  :include_level => true,
+  :identifier_delimiter => '. '
+}
+```
+
+To add extra fields, such as subjects, you can add the following:
+
+```ruby
+inherited_fields_extras = [
+  {
+    code: 'subjects',
+    property: 'subjects',
+    inherit_if: proc { |json| json.select { |j| true } },
+    inherit_directly: false,
+  },
+]
+```
+
+When you set include_level to true, that means the archival object level will be included in the identifier so that you don't have to repeat that data.
For example, if the level of description is "Series" and the archival object identifier is "1", and the parent resource identifier is "MSS 1", then the composite identifier would display as "MSS 1. Series 1" at that series level and for any descendant record. If you set include_level to false, then the display would be "MSS 1. 1".
+
+### License
+
+ArchivesSpace is released under the [Educational Community License,
+version 2.0](http://opensource.org/licenses/ecl2.php). See the
+[COPYING](https://github.com/archivesspace/archivesspace/blob/master/COPYING) file for more information.
diff --git a/src/content/docs/de/architecture/search.md b/src/content/docs/de/architecture/search.md
new file mode 100644
index 0000000..6320831
--- /dev/null
+++ b/src/content/docs/de/architecture/search.md
@@ -0,0 +1,46 @@
+---
+title: Search indexing
+description: Explanation of how ArchivesSpace uses Solr for indexing added/updated/deleted records and the differences between the periodic and real-time modes of indexing records.
+---
+
+The ArchivesSpace system uses Solr for its full-text search. As
+records are added/updated/deleted by the backend, the corresponding
+changes are made to the Solr index to keep them (roughly)
+synchronized.
+
+Keeping the backend and Solr in sync is the job of the "indexer", a
+separate process that runs in the background and watches for record
+updates. The indexer operates in two modes simultaneously:
+
+- The periodic mode polls the backend to get a list of records that
+  were added/modified/deleted since it last checked. These changes
+  are propagated to the Solr index. This generally happens every 30
+  to 60 seconds (and is configurable).
+- The real-time mode responds to updates as they happen, applying
+  changes to Solr as soon as they're applied to the backend. This
+  aims to reflect updates within the search indexes in milliseconds
+  or seconds.
+
+The two modes of operation overlap somewhat, but they serve different
+purposes.
The periodic mode ensures that records are never missed due +to transient failures, and will bring the indexes up to date even if +the indexer hasn't run for quite some time--even creating them from +scratch if necessary. This mode is also used for indexing updates +made by bulk import processes and other updates that don't need to be +reflected in the indexes immediately. + +The real-time indexer mode attempts to apply updates to the index much +more quickly. Rather than polling, it performs a `GET` request +against the `/update-feed` endpoint of the backend. This endpoint +returns any records that were updated since the last time it was asked +and, most importantly, it leaves the request hanging if no records +have changed. + +By calling this endpoint in a loop, the real-time indexer spends most +of its time sitting around waiting for something to happen. The +moment a record is updated, the already-pending request to the +`/update-feed` endpoint yields the updated record, which is sent to +Solr and indexed immediately. This avoids the delays associated with +polling and keeps indexing latency low where it matters. For example, +newly created records should appear in the browse list by the time a +user views it. diff --git a/src/content/docs/de/customization/authentication.md b/src/content/docs/de/customization/authentication.md new file mode 100644 index 0000000..e68959a --- /dev/null +++ b/src/content/docs/de/customization/authentication.md @@ -0,0 +1,139 @@ +--- +title: Additional authentication +description: Instructions on how to install and configure a custom authentication handler via a plugin. +--- + +ArchivesSpace supports LDAP-based authentication out of the box, but you can +authenticate against other password-based user directories by defining your own +authentication handler, creating a plug-in, and configuring your ArchivesSpace +instance to use it. 
If you would rather not have to create your own handler,
+there is a [plugin](https://github.com/lyrasis/aspace-oauth) available that uses OAuth user authentication that you can add
+to your ArchivesSpace installation.
+
+## Creating a new authentication handler class to use in a plug-in
+
+An authentication handler is just a class that implements a couple of
+key methods:
+
+- `initialize(opts)` -- An object constructor which receives the
+  configuration block specified in the system's configuration.
+- `name` -- A zero-argument method which just returns a string that
+  identifies the instance of your handler. The format of this
+  string isn't important: it just gets stored as a user attribute
+  (in the ArchivesSpace database) to make it possible to tell which
+  authentication source a user last successfully authenticated
+  against.
+- `authenticate(username, password)` -- a method which checks
+  whether `password` is the correct password for `username`. If the
+  password is correct, returns an instance of `JSONModel(:user)`.
+  Otherwise, returns `nil`.
+
+A new instance of your handler will be created for each login attempt,
+so there's no need to handle concurrency in your implementation.
+
+Your `authenticate` method can do whatever is required to check that
+the provided password is correct, with the only constraint being that
+it must return either `nil` or a `JSONModel(:user)` instance.
+
+The `JSONModel(:user)` class (whose JSON schema is defined in
+`common/schemas/user.rb`) defines the set of properties that the
+system needs for a user. When you return a `JSONModel(:user)` object,
+its values will be used to create an ArchivesSpace user (if a user by
+that name didn't exist already), or update the existing user (if they
+were already known).
+
+**Note**: The `JSONModel(:user)` class validates the values you give it
+against its JSON schema and throws a `JSONModel::ValidationException`
+if anything isn't right.
If this happens within your handler, the +exception will be logged and the authentication request will fail. + +### A skeleton implementation + +Suppose you already have a database with a table containing users that +should be able to log in to ArchivesSpace. Below is a sketch of an +authentication handler that will connect to this database and use it +for authentication. + +```ruby +# For this example we'll use the Sequel database toolkit. Note that +# this isn't necessary--you could use whatever database library you +# like here. +require 'sequel' + +class MyDatabaseAuth + + # For easy access to the JSONModel(:user) class + include JSONModel + + + def initialize(definition) + # Store the database connection details for use at + # authentication time. + @db_url = definition[:db_url] or raise "Need a value for :db_url" + end + + + # Just for informational purposes. Return a string containing our + # database URL. + def name + "MyDatabaseAuth - #{@db_url}" + end + + + def authenticate(username, password) + # Open a connection to the database + Sequel.connect(@db_url) do |db| + + # Check whether we have an entry for the given username + # and password in our database's "users" table + user = db[:users].filter(:username => username, + :password => password). + first + + if !user + # The user couldn't be found, or their password was wrong. + # Authentication failed. + return nil + end + + # Build and return a JSONModel(:user) instance from fields in the database + JSONModel(:user).from_hash(:username => username, + :name => user[:user_full_name]) + + end + end + +end +``` + +In order to use your new authentication handler, you'll need to add it to the plug-in +architecture in ArchivesSpace and enable it. Create a new directory, called our_auth +perhaps, in the plugins directory of your ArchivesSpace installation. Inside +that directory create this directory hierarchy `backend/model/` and place the +new class file there. Next, configure the new handler. 
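Concretely, the placement step might look like this from the root of your ArchivesSpace installation. The plug-in name `our_auth` and the handler file name are illustrative, not required names:

```shell
# Create the plug-in directory hierarchy (names are illustrative)
mkdir -p plugins/our_auth/backend/model

# Save the handler class shown above into that directory, e.g. as:
#   plugins/our_auth/backend/model/my_database_auth.rb
```

The class file is picked up from `backend/model/` when the plug-in is enabled, as described in the next section.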
+
+## Modifying your configuration
+
+To have ArchivesSpace invoke your new authentication handler, just add
+a new entry to the `:authentication_sources` configuration block in the
+`config/config.rb` file.
+
+A configuration for the above example might be as follows:
+
+```ruby
+AppConfig[:authentication_sources] = [{
+    :model => 'MyDatabaseAuth',
+    :db_url => 'jdbc:mysql://localhost:3306/somedb?user=myuser&password=mypassword',
+  }]
+```
+
+## Add the plug-in to the list of plug-ins already enabled
+
+In the `config/config.rb` file, find the setting of AppConfig[:plugins] and add
+a reference to the new plug-in there. For example, if you named it our_auth, the
+AppConfig[:plugins] setting may look something like this:
+
+```ruby
+AppConfig[:plugins] = ['local', 'hello_world', 'our_auth']
+```
+
+Restart your ArchivesSpace installation and you should now see authentication
+requests hitting your new handler.
diff --git a/src/content/docs/de/customization/bower.md b/src/content/docs/de/customization/bower.md
new file mode 100644
index 0000000..1197f7f
--- /dev/null
+++ b/src/content/docs/de/customization/bower.md
@@ -0,0 +1,68 @@
+---
+title: Managing frontend assets with Bower
+description: Instructions on how to add static assets to the core project.
+---
+
+This is aimed at developers and applies to the 'frontend' application only.
+
+If you wish to add static assets to the core project (i.e., JavaScript, CSS,
+Less files) please use `bower` to add and install them so we know what's what
+and when to upgrade.
+
+If you wish to do a good deed for ArchivesSpace you can track down the source
+of any vendor assets not included in bower.json and get them updated and
+installed according to this protocol.
+
+## General Setup
+
+### Step 1: install npm
+
+On OSX, for example:
+
+```shell
+brew install npm
+```
+
+### Step 2: install Bower
+
+```shell
+npm install bower -g
+```
+
+### Step 3: install components
+
+```shell
+bower install
+```
+
+## Adding a static asset to ASpace Frontend (Staff UI)
+
+### Step 1: add the component
+
+```shell
+bower install <PACKAGE NAME> --save
+```
+
+### Step 2: map Bower > Rails
+
+Edit the bower.json file to map the assets you want from bower_components
+to assets. See the examples in bower.json. This is kind of a hack to work around:
+https://github.com/blittle/bower-installer/issues/75
+
+### Step 3: Install assets
+
+```shell
+alias npm-exec='PATH=$(npm bin):$PATH'
+npm-exec bower-installer
+```
+
+### Step 4: Check assets in
+
+Check the installed assets into Git. We version control bower.json and the
+installed files, but not the bower_components directory.
+
+### Production!
+
+Don't forget: if you are adding assets that don't have a .js extension, you
+need to add them to `frontend/config/environments/production.rb`.
diff --git a/src/content/docs/de/customization/configuration.md b/src/content/docs/de/customization/configuration.md
new file mode 100644
index 0000000..ef98c89
--- /dev/null
+++ b/src/content/docs/de/customization/configuration.md
@@ -0,0 +1,1249 @@
+---
+title: Configuration
+description: Lists all configuration options available within the config/config.rb file, including configuration names, values, and suggestions for setup.
+---
+
+The primary configuration for ArchivesSpace is done in the config/config.rb
+file. By default, this file contains the default settings, indicated by
+commented-out lines (lines starting with "#"). You can adjust these
+settings by adding new lines that change the default and restarting
+ArchivesSpace. Be sure that your new settings are not commented out
+(i.e. they do NOT start with a "#"), otherwise the settings will not take effect.
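For example (using a setting covered later in this document), the difference between a shipped default and an active override looks like this:

```ruby
# Default as shipped -- commented out, so it has no effect:
#AppConfig[:default_admin_password] = "admin"

# Active override -- uncommented, takes effect after restarting ArchivesSpace:
AppConfig[:default_admin_password] = "choose-a-better-password"
```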
+ +## Commonly changed settings + +### Database config + +#### :db_url + +Set your database name and credentials. The default specifies that the embedded database should be used. +It is recommended to use a MySQL database instead of the embedded database. +For more info, see [Using MySQL](/provisioning/mysql) + +This is an example of specifying MySQL credentials: + +`AppConfig[:db_url] = "jdbc:mysql://127.0.0.1:3306/aspace?useUnicode=true&characterEncoding=UTF-8&user=as&password=as123"` + +#### :db_max_connections + +Set the maximum number of database connections used by the application. +Default is derived from the number of indexer threads. + +`AppConfig[:db_max_connections] = proc { 20 + (AppConfig[:indexer_thread_count] * 2) }` + +### URLs for ArchivesSpace components + +Set the ArchivesSpace backend port. The backend listens on port 8089 by default. + +`AppConfig[:backend_url] = "http://localhost:8089"` + +Set the ArchivesSpace staff interface (frontend) port. The staff interface listens on port 8080 by default. + +`AppConfig[:frontend_url] = "http://localhost:8080"` + +Set the ArchivesSpace public interface port. The public interface listens on port 8081 by default. + +`AppConfig[:public_url] = "http://localhost:8081"` + +Set the ArchivesSpace OAI server port. The OAI server listens on port 8082 by default. + +`AppConfig[:oai_url] = "http://localhost:8082"` + +Set the ArchivesSpace Solr index port. The Solr server listens on port 8090 by default. + +`AppConfig[:solr_url] = "http://localhost:8090"` + +Set the ArchivesSpace indexer port. The indexer listens on port 8091 by default. + +`AppConfig[:indexer_url] = "http://localhost:8091"` + +Set the ArchivesSpace API documentation port. The API documentation listens on port 8888 by default. 
+
+`AppConfig[:docs_url] = "http://localhost:8888"`
+
+### Enabling ArchivesSpace components
+
+Enable or disable specific components by setting the following settings to true or false (defaults to true):
+
+```ruby
+AppConfig[:enable_backend] = true
+AppConfig[:enable_frontend] = true
+AppConfig[:enable_public] = true
+AppConfig[:enable_solr] = true
+AppConfig[:enable_indexer] = true
+AppConfig[:enable_docs] = true
+AppConfig[:enable_oai] = true
+```
+
+### Application logging
+
+By default, all logging will be output on the screen while the archivesspace command
+is running. When running as a daemon/service, this is put into a file in
+`logs/archivesspace.out`. You can route log output to a different file per component by changing the log value to
+a filepath that archivesspace has write access to.
+
+You can also set the logging level for each component. Valid values are:
+
+- `debug` (everything)
+- `info`
+- `warn`
+- `error`
+- `fatal` (severe only)
+
+#### `AppConfig[:frontend_log]`
+
+File for log output for the frontend (staff interface). Set to "default" to
+route log output to archivesspace.out.
+
+#### `AppConfig[:frontend_log_level]`
+
+Logging level for the frontend.
+
+#### `AppConfig[:backend_log]`
+
+File for log output for the backend. Set to "default" to
+route log output to archivesspace.out.
+
+#### `AppConfig[:backend_log_level]`
+
+Logging level for the backend.
+
+#### `AppConfig[:pui_log]`
+
+File for log output for the public UI. Set to "default" to
+route log output to archivesspace.out.
+
+#### `AppConfig[:pui_log_level]`
+
+Logging level for the public UI.
+
+#### `AppConfig[:indexer_log]`
+
+File for log output for the indexer. Set to "default" to
+route log output to archivesspace.out.
+
+#### `AppConfig[:indexer_log_level]`
+
+Logging level for the indexer.
+
+### Database logging
+
+#### `AppConfig[:db_debug_log]`
+
+Set to true to log all SQL statements.
+Note that this will have a performance impact!
+
+`AppConfig[:db_debug_log] = false`
+
+#### `AppConfig[:mysql_binlog]`
+
+Set to true if you have enabled MySQL binary logging.
+
+`AppConfig[:mysql_binlog] = false`
+
+### Solr backups
+
+#### `AppConfig[:solr_backup_schedule]`
+
+Set the Solr backup schedule. By default, Solr backups will run at the top of every hour ("0 * * * *"). See https://crontab.guru/ for
+information about the schedule syntax.
+
+`AppConfig[:solr_backup_schedule] = "0 * * * *"`
+
+#### `AppConfig[:solr_backup_number_to_keep]`
+
+Number of Solr backups to keep (default = 1)
+
+`AppConfig[:solr_backup_number_to_keep] = 1`
+
+#### `AppConfig[:solr_backup_directory]`
+
+Directory to store Solr backups.
+
+`AppConfig[:solr_backup_directory] = proc { File.join(AppConfig[:data_directory], "solr_backups") }`
+
+### Default Solr params
+
+#### `AppConfig[:solr_params]`
+
+Add default Solr params.
+
+A simple example: use AND for search:
+
+`AppConfig[:solr_params] = { "q.op" => "AND" }`
+
+A more complex example: set the boost query value (bq) to boost the relevancy
+for the query string in the title, set the phrase fields parameter (pf) to boost
+the relevancy for the title when the query terms are in close proximity to each
+other, and set the phrase slop (ps) parameter for the pf parameter to indicate
+how close the proximity should be:
+
+```ruby
+AppConfig[:solr_params] = {
+  "bq" => proc { "title:\"#{@query_string}\"*" },
+  "pf" => 'title^10',
+  "ps" => 0,
+}
+```
+
+### Language
+
+#### `AppConfig[:locale]`
+
+Set the application's language (see the .yml files in
+https://github.com/archivesspace/archivesspace/tree/master/common/locales
+for a list of available locale codes). Default is English (:en):
+
+`AppConfig[:locale] = :en`
+
+### Plugin registration
+
+#### `AppConfig[:plugins]`
+
+Plug-ins to load. They will load in the order specified.
+
+`AppConfig[:plugins] = ['local', 'lcnaf']`
+
+### Thread count
+
+#### `AppConfig[:job_thread_count]`
+
+The number of concurrent threads available to run background jobs.
+Introduced because long running jobs were blocking the queue. +Resist the urge to set this to a big number! + +`AppConfig[:job_thread_count] = 2` + +### OAI configuration options + +**NOTE: As of version 2.5.2, the following parameters (oai_repository_name, oai_record_prefix, and oai_admin_email) have been deprecated. They should be set in the Staff User Interface. To set them, select the System menu in the Staff User Interface and then select Manage OAI-PMH Settings. These three settings are at the top of the page in the General Settings section. These settings will be completely removed from the config file when version 2.6.0 is released.** + +#### `AppConfig[:oai_repository_name]` + +`AppConfig[:oai_repository_name] = 'ArchivesSpace OAI Provider'` + +#### `AppConfig[:oai_record_prefix]` + +`AppConfig[:oai_record_prefix] = 'oai:archivesspace'` + +#### `AppConfig[:oai_admin_email]` + +`AppConfig[:oai_admin_email] = 'admin@example.com'` + +#### `AppConfig[:oai_sets]` + +In addition to the sets based on level of description, you can define OAI Sets +based on repository codes and/or sponsors as follows: + +```ruby +AppConfig[:oai_sets] = { + 'repository_set' => { + :repo_codes => ['hello626'], + :description => "A set of one or more repositories", + }, + + 'sponsor_set' => { + :sponsors => ['The_Sponsor'], + :description => "A set of one or more sponsors", + }, +} +``` + +## Other less commonly changed settings + +### Default admin password + +#### `AppConfig[:default_admin_password]` + +Set default admin password. Default password is "admin". + +`#AppConfig[:default_admin_password] = "admin"` + +### Data directories + +#### `AppConfig[:data_directory]` + +If you run ArchivesSpace using the standard scripts (archivesspace.sh, +archivesspace.bat or as a Windows service), the value of :data_directory is +automatically set to be the "data" directory of your ArchivesSpace +distribution. 
You don't need to change this value unless you specifically
+want ArchivesSpace to put its data files elsewhere.
+
+`AppConfig[:data_directory] = File.join(Dir.home, "ArchivesSpace")`
+
+#### `AppConfig[:backup_directory]`
+
+Directory to store automated backups when using the embedded demo database (Apache Derby instead of MySQL). This defaults to `demo_db_backups` within the `data` directory.
+
+`AppConfig[:backup_directory] = proc { File.join(AppConfig[:data_directory], "demo_db_backups") }`
+
+### Solr defaults
+
+#### `AppConfig[:solr_indexing_frequency_seconds]`
+
+The number of seconds between each run of the SUI and PUI indexers. The indexers will perform an indexing cycle every configured number of seconds.
+
+`AppConfig[:solr_indexing_frequency_seconds] = 30`
+
+#### `AppConfig[:solr_facet_limit]`
+
+The maximum number of distinct facet terms Solr will include in the response for a given field.
+
+`AppConfig[:solr_facet_limit] = 100`
+
+#### `AppConfig[:default_page_size]`
+
+The number of records included in each page in all paginated backend API responses.
+
+`AppConfig[:default_page_size] = 10`
+
+#### `AppConfig[:max_page_size]`
+
+Requests to the backend API can define a custom page_size param. This is the maximum allowed page size.
+
+`AppConfig[:max_page_size] = 250`
+
+### Cookie prefix
+
+#### `AppConfig[:cookie_prefix]`
+
+A prefix added to cookies used by the application.
+Change this if you're running more than one instance of ArchivesSpace on the
+same hostname (i.e. multiple instances on different ports).
+Default is "archivesspace".
+
+`AppConfig[:cookie_prefix] = "archivesspace"`
+
+### SUI Indexer settings
+
+The periodic indexer can run using multiple threads to take advantage of
+multiple CPU cores.
By setting these two options, you can control how many
+CPU cores are used, and the amount of memory that will be consumed by the
+indexing process (more cores and/or more records per thread means more memory used).
+
+#### `AppConfig[:indexer_records_per_thread]`
+
+The size of each batch of records passed to each indexer worker thread to process and push to Solr. More records per thread means that more memory will be used by the indexer process.
+`AppConfig[:indexer_records_per_thread] = 25`
+
+#### `AppConfig[:indexer_thread_count]`
+
+The number of worker threads to be used by the SUI indexer. More worker threads means that more CPU cores will be used.
+`AppConfig[:indexer_thread_count] = 4`
+
+#### `AppConfig[:indexer_solr_timeout_seconds]`
+
+The indexer makes requests to Solr in order to push updated records to the Solr index. This is the maximum number of seconds that the indexer will wait for Solr to respond to a request.
+
+`AppConfig[:indexer_solr_timeout_seconds] = 300`
+
+### PUI Indexer Settings
+
+#### `AppConfig[:pui_indexer_enabled]`
+
+If false, no PUI indexer is started. Set to false if not using the PUI at all.
+`AppConfig[:pui_indexer_enabled] = true`
+
+#### `AppConfig[:pui_indexing_frequency_seconds]`
+
+The number of seconds between each run of the PUI indexer. The indexer will perform an indexing cycle every configured number of seconds.
+`AppConfig[:pui_indexing_frequency_seconds] = 30`
+
+#### `AppConfig[:pui_indexer_records_per_thread]`
+
+The size of each batch of records passed to each indexer worker thread to process and push to Solr.
+The PUI indexer can run using multiple threads to take advantage of
+multiple CPU cores. By setting these two options, you can control how many
+CPU cores are used, and the amount of memory that will be consumed by the
+indexing process (more cores and/or more records per thread means more memory used).
+
+`AppConfig[:pui_indexer_records_per_thread] = 25`
+
+#### `AppConfig[:pui_indexer_thread_count]`
+
+The number of worker threads to be used by the PUI indexer. More worker threads means that more CPU cores will be used.
+`AppConfig[:pui_indexer_thread_count] = 1`
+
+### Index state
+
+#### `AppConfig[:index_state_class]`
+
+The indexer needs a place to store its state (to keep track of which records have already been indexed).
+Set to 'IndexState' (default) to store the state in the local `data` directory.
+Set to 'IndexStateS3' (optional) to store the state in an AWS S3 bucket.
+
+`AppConfig[:index_state_class] = 'IndexState'`
+
+#### `AppConfig[:index_state_s3]` - Relevant only when using S3 storage for the indexer state
+
+If using S3 storage for the indexer state (optional), you need to configure access to S3.
+
+NOTE: S3 charges for read / update requests, and the PUI indexer continually
+writes to state files, so you may want to increase `pui_indexing_frequency_seconds` and `solr_indexing_frequency_seconds`.
+
+##### Configuring S3 access using environment variables (default)
+
+By default, the S3 configuration is fetched from the following shell environment variables:
+
+- `AWS_REGION`
+- `AWS_ACCESS_KEY_ID`
+- `AWS_SECRET_ACCESS_KEY`
+- `AWS_ASPACE_BUCKET`
+
+The `:cookie_prefix` configuration is used as a prefix for the state files stored in the bucket - useful when using the same bucket to store the indexer state of multiple ArchivesSpace instances.
+
+##### Configuring S3 access using AppConfig variable in the `config.rb` file
+
+```ruby
+AppConfig[:index_state_s3] = {
+  region: "us-east-1",
+  aws_access_key_id: "ASIAXXXXEXAMPLEID",
+  aws_secret_access_key: "xXxxXXxxXX/XXXXXX/XXXXXXXEXAMPLEKEY",
+  bucket: "my-as-test-bucket",
+  prefix: proc { "#{AppConfig[:cookie_prefix]}_" },
+}
+```
+
+You can set `prefix` to a plain string instead of the proc shown above, which derives the prefix from the `:cookie_prefix` setting.
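+
+As a sketch of the default environment-variable approach, the variables listed above might be exported in the shell that launches ArchivesSpace (this is a config fragment; the values shown are placeholders, not real credentials):
+
+```sh
+# Placeholder values - substitute your own region, credentials and bucket
+export AWS_REGION="us-east-1"
+export AWS_ACCESS_KEY_ID="ASIAXXXXEXAMPLEID"
+export AWS_SECRET_ACCESS_KEY="xXxxXXxxXX/XXXXXX/XXXXXXXEXAMPLEKEY"
+export AWS_ASPACE_BUCKET="my-as-test-bucket"
+```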
+
+### Misc. database options
+
+#### `AppConfig[:allow_other_unmapped]`
+
+Allow assigning the special enumeration value `other_unmapped` for dynamic enum (controlled value) fields. When set to `true`, `other_unmapped` is treated as a valid value for all enumeration (controlled value) fields and is added as a possible value for all controlled value lists.
+This feature is designed for handling unmapped or unknown enumeration values. It is particularly useful during data migrations, where source data may have values not yet defined in controlled value lists, or more generally when importing external data that uses values not already defined in a controlled value list.
+
+`AppConfig[:allow_other_unmapped] = false`
+
+#### `AppConfig[:db_url_redacted]`
+
+This is how the database URL (which includes the database username and password) will appear in the logs. The default replaces the username and password with `REDACTED`, so that:
+`"user=john&password=secret123"`
+becomes
+`"user=[REDACTED]&password=[REDACTED]"`
+
+`AppConfig[:db_url_redacted] = proc { AppConfig[:db_url].gsub(/(user|password)=(.*?)(&|$)/, '\1=[REDACTED]\3') }`
+
+#### `AppConfig[:demo_db_backup_schedule]`
+
+When using the embedded demo database (Apache Derby instead of MySQL), this is the schedule of the automated backups, in cron format. By default, it is at 4AM every day.
+
+`AppConfig[:demo_db_backup_schedule] = "0 4 * * *"`
+
+#### `AppConfig[:demo_db_backup_number_to_keep]`
+
+How many backups to keep available when using the embedded demo database.
+
+`AppConfig[:demo_db_backup_number_to_keep] = 7`
+
+#### `AppConfig[:allow_unsupported_database]`
+
+Set this to true if you are determined to use a database other than MySQL or the embedded demo database based on Apache Derby (not recommended!).
+
+`AppConfig[:allow_unsupported_database] = false`
+
+#### `AppConfig[:allow_non_utf8_mysql_database]`
+
+Set this to true to skip the standard validation that the character encoding of MySQL tables is set to UTF-8 (not recommended!).
+
+`AppConfig[:allow_non_utf8_mysql_database] = false`
+
+### Proxy URLs
+
+If you are serving user-facing applications via proxy
+(i.e., another domain or port, or via https, or for a prefix), it is
+recommended that you record those URLs in your configuration.
+
+#### `AppConfig[:frontend_proxy_url]`
+
+Proxy URL for the frontend (staff interface)
+
+`AppConfig[:frontend_proxy_url] = proc { AppConfig[:frontend_url] }`
+
+#### `AppConfig[:public_proxy_url]`
+
+Proxy URL for the public interface
+
+`AppConfig[:public_proxy_url] = proc { AppConfig[:public_url] }`
+
+#### `AppConfig[:oai_proxy_url]`
+
+Proxy URL for the OAI service (if exposed, see OAI section)
+
+`AppConfig[:oai_proxy_url] = 'http://your-public-oai-url.example.com'`
+
+#### `AppConfig[:frontend_proxy_prefix]`
+
+Don't override this setting unless you know what you're doing
+
+`AppConfig[:frontend_proxy_prefix] = proc { "#{URI(AppConfig[:frontend_proxy_url]).path}/".gsub(%r{/+$}, "/") }`
+
+#### `AppConfig[:public_proxy_prefix]`
+
+Don't override this setting unless you know what you're doing
+
+`AppConfig[:public_proxy_prefix] = proc { "#{URI(AppConfig[:public_proxy_url]).path}/".gsub(%r{/+$}, "/") }`
+
+### Enable component applications
+
+Setting any of these to false will prevent the associated applications from starting.
+Temporarily disabling the frontend and public UIs and/or the indexer may help users
+who are running into memory-related issues during migration.
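+
+For example, a minimal sketch of temporarily disabling the staff and public UIs during a large migration (restore both to `true` once the migration completes):
+
+```ruby
+# Temporarily run without the user-facing web applications
+AppConfig[:enable_frontend] = false
+AppConfig[:enable_public] = false
+```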
+
+#### `AppConfig[:enable_backend]`
+
+`AppConfig[:enable_backend] = true`
+
+#### `AppConfig[:enable_frontend]`
+
+`AppConfig[:enable_frontend] = true`
+
+#### `AppConfig[:enable_public]`
+
+`AppConfig[:enable_public] = true`
+
+#### `AppConfig[:enable_solr]`
+
+`AppConfig[:enable_solr] = true`
+
+#### `AppConfig[:enable_indexer]`
+
+`AppConfig[:enable_indexer] = true`
+
+#### `AppConfig[:enable_docs]`
+
+`AppConfig[:enable_docs] = true`
+
+#### `AppConfig[:enable_oai]`
+
+`AppConfig[:enable_oai] = true`
+
+### Jetty shutdown
+
+Some deployments need the ability to shut down the Jetty service using Jetty's
+ShutdownHandler, which allows a POST request to a specific URI to signal
+server shutdown. The prefix for this URI path is set to `/xkcd` to reduce the
+possibility of a collision in the path configuration. So, the full path would be
+
+`/xkcd/shutdown?token={randomly generated password}`
+
+The launcher creates a password to use this, which is stored
+in the data directory. This is not turned on by default.
+
+#### `AppConfig[:use_jetty_shutdown_handler]`
+
+`AppConfig[:use_jetty_shutdown_handler] = false`
+
+#### `AppConfig[:jetty_shutdown_path]`
+
+`AppConfig[:jetty_shutdown_path] = "/xkcd"`
+
+### Managing multiple backend instances
+
+If you have multiple instances of the backend running behind a load
+balancer, list the URL of each backend instance here. This is used by the
+real-time indexing, which needs to connect directly to each running
+instance.
+
+By default we assume you're not using a load balancer, so we just connect
+to the regular backend URL.
+
+#### `AppConfig[:backend_instance_urls]`
+
+`AppConfig[:backend_instance_urls] = proc { [AppConfig[:backend_url]] }`
+
+### Theme
+
+For theming customization, see https://docs.archivesspace.org/customization/theming/
+
+#### `AppConfig[:frontend_theme]`
+
+Name of the theme to use on the Staff UI
+
+`AppConfig[:frontend_theme] = "default"`
+
+#### `AppConfig[:public_theme]`
+
+Name of the theme to use on the Public UI
+
+`AppConfig[:public_theme] = "default"`
+
+### Session expiration
+
+#### `AppConfig[:session_expire_after_seconds]`
+
+Sessions marked as expirable will time out after this number of seconds of inactivity
+
+`AppConfig[:session_expire_after_seconds] = 3600`
+
+#### `AppConfig[:session_nonexpirable_force_expire_after_seconds]`
+
+Sessions marked as non-expirable will eventually expire too, but after a longer period.
+
+`AppConfig[:session_nonexpirable_force_expire_after_seconds] = 604800`
+
+### System usernames
+
+Hidden system users (not viewable in the Staff UI user management area) are created automatically and used by the indexer, the PUI and the Staff UI in order to access the backend API.
+
+#### `AppConfig[:search_username]`
+
+The user name of the hidden system user that the indexer uses to access the backend API
+`AppConfig[:search_username] = "search_indexer"`
+
+#### `AppConfig[:public_username]`
+
+The user name of the hidden system user that the PUI uses to access the backend API
+
+`AppConfig[:public_username] = "public_anonymous"`
+
+#### `AppConfig[:staff_username]`
+
+The user name of the hidden system user that the Staff UI uses to access the backend API
+
+`AppConfig[:staff_username] = "staff_system"`
+
+### Authentication sources
+
+ArchivesSpace comes with its own user management functionality but can also be configured to authenticate against one or more [LDAP directories](/customization/ldap/).
OAuth authentication is available using the [aspace-oauth plugin](https://github.com/lyrasis/aspace-oauth).
+
+`AppConfig[:authentication_sources] = []`
+
+### Misc. backlog and snapshot settings
+
+#### `AppConfig[:realtime_index_backlog_ms]`
+
+> TODO - Needs more documentation
+
+`AppConfig[:realtime_index_backlog_ms] = 60000`
+
+### Notifications configuration
+
+An internal notification mechanism is used to keep user preferences, enumeration (controlled value list) values, repository information etc. up to date within the UI while minimizing requests to the backend API.
+
+#### `AppConfig[:notifications_backlog_ms]`
+
+Notifications older than this number of milliseconds are considered expired and are no longer announced.
+
+`AppConfig[:notifications_backlog_ms] = 60000`
+
+#### `AppConfig[:notifications_poll_frequency_ms]`
+
+How often to poll for new notifications, in milliseconds.
+
+`AppConfig[:notifications_poll_frequency_ms] = 1000`
+
+#### `AppConfig[:max_usernames_per_source]`
+
+> TODO - Needs more documentation
+
+`AppConfig[:max_usernames_per_source] = 50`
+
+#### `AppConfig[:demodb_snapshot_flag]`
+
+> TODO - Needs more documentation
+
+`AppConfig[:demodb_snapshot_flag] = proc { File.join(AppConfig[:data_directory], "create_demodb_snapshot.txt") }`
+
+### Report Configuration
+
+#### `AppConfig[:report_page_layout]`
+
+Uses valid values for the CSS3 @page directive's size property:
+http://www.w3.org/TR/css3-page/#page-size-prop
+
+`AppConfig[:report_page_layout] = "letter"`
+
+#### `AppConfig[:report_pdf_font_paths]`
+
+> TODO - Needs more documentation
+
+`AppConfig[:report_pdf_font_paths] = proc { ["#{AppConfig[:backend_url]}/reports/static/fonts/dejavu/DejaVuSans.ttf"] }`
+
+#### `AppConfig[:report_pdf_font_family]`
+
+> TODO - Needs more documentation
+
+`AppConfig[:report_pdf_font_family] = "\"DejaVu Sans\", sans-serif"`
+
+### Plugins directory
+
+#### `AppConfig[:plugins_directory]`
+
+By default, the plugins directory will be in your ASpace Home.
+If you want to override that, update this with an absolute path.
+
+`AppConfig[:plugins_directory] = "plugins"`
+
+### Feedback
+
+#### `AppConfig[:feedback_url]`
+
+URL that the feedback link points to.
+You can remove the link from the footer by making the value blank.
+
+`AppConfig[:feedback_url] = "http://archivesspace.org/contact"`
+
+### User registration
+
+#### `AppConfig[:allow_user_registration]`
+
+Allow an unauthenticated user to create an account
+
+`AppConfig[:allow_user_registration] = true`
+
+### Help Configuration
+
+#### `AppConfig[:help_enabled]`
+
+> TODO - Needs more documentation
+
+`AppConfig[:help_enabled] = true`
+
+#### `AppConfig[:help_url]`
+
+> TODO - Needs more documentation
+
+`AppConfig[:help_url] = "https://archivesspace.atlassian.net/wiki/spaces/ArchivesSpaceUserManual/overview"`
+
+#### `AppConfig[:help_topic_base_url]`
+
+> TODO - Needs more documentation
+
+`AppConfig[:help_topic_base_url] = "https://archivesspace.atlassian.net/wiki/spaces/ArchivesSpaceUserManual/pages/"`
+
+### Shared storage
+
+#### `AppConfig[:shared_storage]`
+
+`AppConfig[:shared_storage] = proc { File.join(AppConfig[:data_directory], "shared") }`
+
+### Background jobs
+
+#### `AppConfig[:job_file_path]`
+
+Formerly known as `:import_job_path`
+
+> TODO - Needs more documentation
+
+`AppConfig[:job_file_path] = proc { AppConfig.has_key?(:import_job_path) ? AppConfig[:import_job_path] : File.join(AppConfig[:shared_storage], "job_files") }`
+
+#### `AppConfig[:job_poll_seconds]`
+
+> TODO - Needs more documentation
+
+`AppConfig[:job_poll_seconds] = proc { AppConfig.has_key?(:import_poll_seconds) ? AppConfig[:import_poll_seconds] : 5 }`
+
+#### `AppConfig[:job_timeout_seconds]`
+
+> TODO - Needs more documentation
+
+`AppConfig[:job_timeout_seconds] = proc { AppConfig.has_key?(:import_timeout_seconds) ?
AppConfig[:import_timeout_seconds] : 300 }`
+
+#### `AppConfig[:jobs_cancelable]`
+
+By default, only allow jobs to be cancelled if we're running against MySQL (since we can roll back)
+
+`AppConfig[:jobs_cancelable] = proc { (AppConfig[:db_url] != AppConfig.demo_db_url).to_s }`
+
+### Locations
+
+#### `AppConfig[:max_location_range]`
+
+> TODO - Needs more documentation
+
+`AppConfig[:max_location_range] = 1000`
+
+### Schema Info check
+
+#### `AppConfig[:ignore_schema_info_check]`
+
+The ArchivesSpace backend will not start if the database's schema_info version is not set
+correctly for this version of ArchivesSpace. This is to ensure that all the
+migrations have run and completed before starting the app. You can override
+this check here. Do so at your own peril.
+
+`AppConfig[:ignore_schema_info_check] = false`
+
+### Demo data
+
+#### `AppConfig[:demo_data_url]`
+
+This is a URL that points to some demo data that can be used for testing,
+teaching, etc. To use this, set an OS environment variable of ASPACE_DEMO = true
+
+`AppConfig[:demo_data_url] = "https://s3-us-west-2.amazonaws.com/archivesspacedemo/latest-demo-data.zip"`
+
+### External IDs
+
+#### `AppConfig[:show_external_ids]`
+
+Expose external IDs in the frontend
+
+`AppConfig[:show_external_ids] = false`
+
+### Jetty request/response buffer
+
+Set the allowed size of the request/response header that Jetty will accept
+(anything bigger gets a 403 error). Note that if you increase this size,
+you will also have to configure Nginx/Apache accordingly if you're using one of them.
+
+#### `AppConfig[:jetty_response_buffer_size_bytes]`
+
+`AppConfig[:jetty_response_buffer_size_bytes] = 64 * 1024`
+
+#### `AppConfig[:jetty_request_buffer_size_bytes]`
+
+`AppConfig[:jetty_request_buffer_size_bytes] = 64 * 1024`
+
+### Container management configuration fields
+
+#### `AppConfig[:container_management_barcode_length]`
+
+Defines global and repo-level barcode validations (validating on length only).
+Barcodes that have either no value, or a value between :min and :max, will validate on save.
+Set global constraints via :system_default, and use the repo_code value for repository-level constraints.
+Note that :system_default will always inherit down its values when possible.
+
+`AppConfig[:container_management_barcode_length] = {:system_default => {:min => 5, :max => 10}, 'repo' => {:min => 9, :max => 12}, 'other_repo' => {:min => 9, :max => 9} }`
+
+#### `AppConfig[:container_management_extent_calculator]`
+
+Globally defines the behavior of the extent calculator.
+Use :report_volume (true/false) to define whether space should be reported in cubic
+or linear dimensions.
+Use :unit (:feet, :inches, :meters, :centimeters) to define the unit in which the calculator
+reports extents.
+Use :decimal_places to define how many decimal places the calculator should return.
+
+Example:
+
+`AppConfig[:container_management_extent_calculator] = { :report_volume => true, :unit => :feet, :decimal_places => 3 }`
+
+### Record inheritance in public interface
+
+#### `AppConfig[:record_inheritance]`
+
+Define the fields for a record type that are inherited from ancestors
+if they don't have a value in the record itself.
+This is used in common/record_inheritance.rb and was developed to support
+the new public UI application.
+Note - any changes to the record_inheritance config will require a reindex of PUI
+records to take effect.
To do this, remove the files from `indexer_pui_state`.
+
+```ruby
+AppConfig[:record_inheritance] = {
+  :archival_object => {
+    :inherited_fields => [
+      {
+        :property => 'title',
+        :inherit_directly => true
+      },
+      {
+        :property => 'component_id',
+        :inherit_directly => false
+      },
+      {
+        :property => 'language',
+        :inherit_directly => true
+      },
+      {
+        :property => 'dates',
+        :inherit_directly => true
+      },
+      {
+        :property => 'extents',
+        :inherit_directly => false
+      },
+      {
+        :property => 'linked_agents',
+        :inherit_if => proc {|json| json.select {|j| j['role'] == 'creator'} },
+        :inherit_directly => false
+      },
+      {
+        :property => 'notes',
+        :inherit_if => proc {|json| json.select {|j| j['type'] == 'accessrestrict'} },
+        :inherit_directly => true
+      },
+      {
+        :property => 'notes',
+        :inherit_if => proc {|json| json.select {|j| j['type'] == 'scopecontent'} },
+        :inherit_directly => false
+      },
+      {
+        :property => 'notes',
+        :inherit_if => proc {|json| json.select {|j| j['type'] == 'langmaterial'} },
+        :inherit_directly => false
+      },
+    ]
+  }
+}
+```
+
+Composite identifiers can also be enabled - when enabled, they are added to the merged record in a
+`_composite_identifier` property.
+
+The values for `:include_level` and `:identifier_delimiter` shown here are the defaults.
+
+If `:include_level` is set to true, then level values (e.g. Series) will be included in `_composite_identifier`.
+
+The `:identifier_delimiter` is used when joining the four-part identifier for resources.
+
+```ruby
+AppConfig[:record_inheritance][:archival_object][:composite_identifiers] = {
+  :include_level => false,
+  :identifier_delimiter => ' '
+}
+```
+
+To configure additional elements to be inherited, use this pattern in your config:
+
+```ruby
+AppConfig[:record_inheritance][:archival_object][:inherited_fields] <<
+  {
+    :property => 'linked_agents',
+    :inherit_if => proc {|json| json.select {|j| j['role'] == 'subject'} },
+    :inherit_directly => true
+  }
+```
+
+...
or use this pattern to add many new elements at once + +```ruby +AppConfig[:record_inheritance][:archival_object][:inherited_fields].concat( + [ + { + :property => 'subjects', + :inherit_if => proc {|json| + json.select {|j| + ! j['_resolved']['terms'].select { |t| t['term_type'] == 'topical'}.empty? } + }, + :inherit_directly => true + }, + { + :property => 'external_documents', + :inherit_directly => false + }, + { + :property => 'rights_statements', + :inherit_directly => false + }, + { + :property => 'instances', + :inherit_directly => false + }, + ]) +``` + +If you want to modify any of the default rules, the safest approach is to uncomment +the entire default record_inheritance config and make your changes. +For example, to stop scopecontent notes from being inherited into file or item records +uncomment the entire record_inheritance default config above, and add a skip_if +clause to the scopecontent rule, like this: + +```ruby + { + :property => 'notes', + :skip_if => proc {|json| ['file', 'item'].include?(json['level']) }, + :inherit_if => proc {|json| json.select {|j| j['type'] == 'scopecontent'} }, + :inherit_directly => false + }, +``` + +### PUI Configurations + +#### `AppConfig[:pui_search_results_page_size]` + +`AppConfig[:pui_search_results_page_size] = 10` + +#### `AppConfig[:pui_branding_img]` + +`AppConfig[:pui_branding_img] = 'archivesspace.small.png'` + +#### `AppConfig[:pui_block_referrer]` + +`AppConfig[:pui_block_referrer] = true # patron privacy; blocks full 'referer' when going outside the domain` + +#### `AppConfig[:pui_max_concurrent_pdfs]` + +The number of PDFs we'll generate (in the background) at the same time. + +PDF generation can be a little memory intensive for large collections, so we +set this fairly low out of the box. 
+
+`AppConfig[:pui_max_concurrent_pdfs] = 2`
+
+#### `AppConfig[:pui_pdf_timeout]`
+
+You can set this to nil or zero to prevent a timeout
+
+`AppConfig[:pui_pdf_timeout] = 600`
+
+#### `AppConfig[:pui_hide]`
+
+`AppConfig[:pui_hide] = {}`
+
+The following determine which 'tabs' are on the main horizontal menu:
+
+```ruby
+AppConfig[:pui_hide][:repositories] = false
+AppConfig[:pui_hide][:resources] = false
+AppConfig[:pui_hide][:digital_objects] = false
+AppConfig[:pui_hide][:accessions] = false
+AppConfig[:pui_hide][:subjects] = false
+AppConfig[:pui_hide][:agents] = false
+AppConfig[:pui_hide][:classifications] = false
+AppConfig[:pui_hide][:search_tab] = false
+```
+
+The following determine globally whether the various "badges" appear on the Repository page.
+These can be overridden at the repository level below (e.g.
+`AppConfig[:pui_repos][{repo_code}][:hide][:counts] = true`):
+
+```ruby
+AppConfig[:pui_hide][:resource_badge] = false
+AppConfig[:pui_hide][:record_badge] = true # hide by default
+AppConfig[:pui_hide][:digital_object_badge] = false
+AppConfig[:pui_hide][:accession_badge] = false
+AppConfig[:pui_hide][:subject_badge] = false
+AppConfig[:pui_hide][:agent_badge] = false
+AppConfig[:pui_hide][:classification_badge] = false
+AppConfig[:pui_hide][:counts] = false
+```
+
+The following determines globally whether the 'container inventory' navigation
+tab/pill is hidden on the resource/collection page:
+
+```ruby
+AppConfig[:pui_hide][:container_inventory] = false
+```
+
+#### `AppConfig[:pui_requests_permitted_for_types]`
+
+Determine when the request button is displayed
+
+`AppConfig[:pui_requests_permitted_for_types] = [:resource, :archival_object, :accession, :digital_object, :digital_object_component]`
+
+#### `AppConfig[:pui_requests_permitted_for_containers_only]`
+
+Set to true to show the request button only for records that have a top container
+
+`AppConfig[:pui_requests_permitted_for_containers_only] = false`
+
+#### `AppConfig[:pui_repos]`
+
+Repository-specific examples.
Replace `{repo_code}` with your repository code, e.g. 'foo' (note the lower case)
+
+`AppConfig[:pui_repos] = {}`
+
+Examples:
+
+For a particular repository, only enable requests for certain record types (note this configuration will override `AppConfig[:pui_requests_permitted_for_types]` for the repository)
+
+```ruby
+AppConfig[:pui_repos]['foo'][:requests_permitted_for_types] = [:resource, :archival_object, :accession, :digital_object, :digital_object_component]
+```
+
+For a particular repository, permit requests only for records with top containers
+
+```ruby
+AppConfig[:pui_repos]['foo'][:requests_permitted_for_containers_only] = true
+```
+
+Set the email address to which repository requests are sent:
+
+```ruby
+AppConfig[:pui_repos]['foo'][:request_email] = {email address}
+```
+
+> TODO - Needs more documentation here
+
+```ruby
+AppConfig[:pui_repos]['foo'][:hide] = {}
+AppConfig[:pui_repos]['foo'][:hide][:counts] = true
+```
+
+#### `AppConfig[:pui_display_deaccessions]`
+
+> TODO - Needs more documentation
+
+`AppConfig[:pui_display_deaccessions] = true`
+
+#### `AppConfig[:pui_page_actions_cite]`
+
+Enable / disable PUI resource/archival object page 'cite' action
+
+`AppConfig[:pui_page_actions_cite] = true`
+
+#### `AppConfig[:pui_page_actions_bookmark]`
+
+Enable / disable PUI resource/archival object page 'bookmark' action
+
+`AppConfig[:pui_page_actions_bookmark] = true`
+
+#### `AppConfig[:pui_page_actions_request]`
+
+Enable / disable PUI resource/archival object page 'request' action
+
+`AppConfig[:pui_page_actions_request] = true`
+
+#### `AppConfig[:pui_page_actions_print]`
+
+Enable / disable PUI resource/archival object page 'print' action
+
+`AppConfig[:pui_page_actions_print] = true`
+
+#### `AppConfig[:pui_enable_staff_link]`
+
+When a user is authenticated, add a link back to the staff interface from the specified record
+
+`AppConfig[:pui_enable_staff_link] = true`
+
+#### `AppConfig[:pui_staff_link_mode]`
+
+By default, the staff link opens the record in the staff interface in edit mode;
+change this to 'readonly' for it to open in readonly mode + +`AppConfig[:pui_staff_link_mode] = 'edit'` + +#### `AppConfig[:pui_page_custom_actions]` + +Add page actions via the configuration + +`AppConfig[:pui_page_custom_actions] = []` + +Javascript action example: + +```ruby +AppConfig[:pui_page_custom_actions] << { + 'record_type' => ['resource', 'archival_object'], # the jsonmodel type to show for + 'label' => 'actions.do_something', # the I18n path for the action button + 'icon' => 'fa-paw', # the font-awesome icon CSS class + 'onclick_javascript' => 'alert("do something grand");', +} +``` + +Hyperlink action example: + +```ruby +AppConfig[:pui_page_custom_actions] << { + 'record_type' => ['resource', 'archival_object'], # the jsonmodel type to show for + 'label' => 'actions.do_something', # the I18n path for the action button + 'icon' => 'fa-paw', # the font-awesome icon CSS class + 'url_proc' => proc {|record| 'http://example.com/aspace?uri='+record.uri}, +} +``` + +Form-POST action example: + +```ruby +AppConfig[:pui_page_custom_actions] << { + 'record_type' => ['resource', 'archival_object'], # the jsonmodel type to show for + 'label' => 'actions.do_something', # the I18n path for the action button + 'icon' => 'fa-paw', # the font-awesome icon CSS class + # 'post_params_proc' returns a hash of params which populates a form with hidden inputs ('name' => 'value') + 'post_params_proc' => proc {|record| {'uri' => record.uri, 'display_string' => record.display_string} }, + # 'url_proc' returns the URL for the form to POST to + 'url_proc' => proc {|record| 'http://example.com/aspace?uri='+record.uri}, + # 'form_id' as string to be used as the form's ID + 'form_id' => 'my_grand_action', +} +``` + +ERB action example: + +```ruby +AppConfig[:pui_page_custom_actions] << { + 'record_type' => ['resource', 'archival_object'], + # the jsonmodel type to show for + # 'erb_partial' returns the path to an erb template from which the action will be rendered + 'erb_partial' 
=> 'shared/my_special_action',
+}
+```
+
+#### `AppConfig[:pui_email_enabled]`
+
+PUI email settings (logs emails when disabled)
+
+`AppConfig[:pui_email_enabled] = false`
+
+#### `AppConfig[:pui_email_override]`
+
+See `AppConfig[:pui_repos][{repo_code}][:request_email]` above for setting repository-level email overrides.
+`pui_email_override` is intended for testing: when set, this email will be the to-address for all sent emails.
+
+`AppConfig[:pui_email_override] = 'testing@example.com'`
+
+#### `AppConfig[:pui_request_email_fallback_to_address]`
+
+The 'to' email address for repositories that don't define their own email
+
+`AppConfig[:pui_request_email_fallback_to_address] = 'testing@example.com'`
+
+#### `AppConfig[:pui_request_email_fallback_from_address]`
+
+The 'from' email address for repositories that don't define their own email
+
+`AppConfig[:pui_request_email_fallback_from_address] = 'testing@example.com'`
+
+#### `AppConfig[:pui_request_use_repo_email]`
+
+Use the repository record email address for requests (overrides config email)
+
+`AppConfig[:pui_request_use_repo_email] = false`
+
+#### `AppConfig[:pui_email_delivery_method]`
+
+`AppConfig[:pui_email_delivery_method] = :sendmail`
+
+#### `AppConfig[:pui_email_sendmail_settings]`
+
+```ruby
+AppConfig[:pui_email_sendmail_settings] = {
+  location: '/usr/sbin/sendmail',
+  arguments: '-i'
+}
+```
+
+#### `AppConfig[:pui_email_smtp_settings]`
+
+Applies when `AppConfig[:pui_email_delivery_method]` is set to `:smtp`.
+
+Example SMTP configuration:
+
+```ruby
+AppConfig[:pui_email_smtp_settings] = {
+  address: 'smtp.gmail.com',
+  port: 587,
+  domain: 'gmail.com',
+  user_name: '<username>',
+  password: '<password>',
+  authentication: 'plain',
+  enable_starttls_auto: true,
+}
+```
+
+#### `AppConfig[:pui_email_perform_deliveries]`
+
+`AppConfig[:pui_email_perform_deliveries] = true`
+
+#### `AppConfig[:pui_email_raise_delivery_errors]`
+
+`AppConfig[:pui_email_raise_delivery_errors] = true`
+
+#### `AppConfig[:pui_readmore_max_characters]`
+ +The number of characters to truncate before showing the 'Read More' link on notes + +`AppConfig[:pui_readmore_max_characters] = 450` + +#### `AppConfig[:pui_expand_all]` + +Whether to expand all additional information blocks at the bottom of record pages by default. `true` expands all blocks, `false` collapses all blocks. + +`AppConfig[:pui_expand_all] = false` + +#### `AppConfig[:max_search_columns]` + +Use to specify the maximum number of columns to display when searching or browsing + +`AppConfig[:max_search_columns] = 7` diff --git a/src/content/docs/de/customization/index.md b/src/content/docs/de/customization/index.md new file mode 100644 index 0000000..fd97d72 --- /dev/null +++ b/src/content/docs/de/customization/index.md @@ -0,0 +1,13 @@ +--- +title: Customization and configuration +description: Index of the pages within the Customization section of the website. +--- + +- [Configuring ArchivesSpace](./configuration) +- [Configuring LDAP authentication](./ldap) +- [Adding support for additional username/password-based authentication backends](./authentication) +- [Customizing text in ArchivesSpace](./locales) +- [ArchivesSpace Plug-ins](./plugins) +- [Theming ArchivesSpace](./theming) +- [Managing frontend assets with Bower](./bower) +- [Adding custom reports](./reports) diff --git a/src/content/docs/de/customization/ldap.md b/src/content/docs/de/customization/ldap.md new file mode 100644 index 0000000..ca4ac29 --- /dev/null +++ b/src/content/docs/de/customization/ldap.md @@ -0,0 +1,70 @@ +--- +title: LDAP authentication +description: Instructions on how to manage and authenticate against one or more LDAP directories. +--- + +ArchivesSpace can manage its own user directory, but can also be +configured to authenticate against one or more LDAP directories by +specifying them in the application's configuration file. When a user +attempts to log in, each authentication source is tried until one +matches. 
+
+Here is a minimal example of an LDAP configuration:
+
+```ruby
+AppConfig[:authentication_sources] = [{
+  :model => 'LDAPAuth',
+  :hostname => 'ldap.example.com',
+  :port => 389,
+  :base_dn => 'ou=people,dc=example,dc=com',
+  :username_attribute => 'uid',
+  :attribute_map => {:cn => :name},
+}]
+```
+
+With this configuration, ArchivesSpace performs authentication by
+connecting to `ldap://ldap.example.com:389/`, binding anonymously,
+and searching the `ou=people,dc=example,dc=com` tree for `uid = <username>`.
+
+If the user is found, ArchivesSpace authenticates them by
+binding with the password they supplied. Finally, the `:attribute_map`
+entry specifies how LDAP attributes should be mapped to ArchivesSpace
+user attributes (mapping LDAP's `cn` to ArchivesSpace's `name` in the
+above example).
+
+Many LDAP directories don't support anonymous binding. To integrate
+with such a directory, you will need to specify the username and
+password of a user with permission to connect to the directory and
+search for other users. Modifying the previous example for this case
+looks like this:
+
+```ruby
+AppConfig[:authentication_sources] = [{
+  :model => 'LDAPAuth',
+  :hostname => 'ldap.example.com',
+  :port => 389,
+  :base_dn => 'ou=people,dc=example,dc=com',
+  :username_attribute => 'uid',
+  :attribute_map => {:cn => :name},
+  :bind_dn => 'uid=archivesspace_auth,ou=system,dc=example,dc=com',
+  :bind_password => 'secretsquirrel',
+}]
+```
+
+Finally, some LDAP directories enforce the use of SSL encryption.
To +configure ArchivesSpace to connect via LDAPS, change the port as +appropriate and specify the `encryption` option: + +```ruby +AppConfig[:authentication_sources] = [{ + :model => 'LDAPAuth', + :hostname => 'ldap.example.com', + :port => 636, + :base_dn => 'ou=people,dc=example,dc=com', + :username_attribute => 'uid', + :attribute_map => {:cn => :name}, + :bind_dn => 'uid=archivesspace_auth,ou=system,dc=example,dc=com', + :bind_password => 'secretsquirrel', + :encryption => :simple_tls, +}] +``` diff --git a/src/content/docs/de/customization/locales.md b/src/content/docs/de/customization/locales.md new file mode 100644 index 0000000..f408128 --- /dev/null +++ b/src/content/docs/de/customization/locales.md @@ -0,0 +1,78 @@ +--- +title: Customizing text +description: Instructions for customizing text in ArchivesSpace using locale files, including how to override labels, messages, tooltips, and placeholders via the Rails I18n API. +--- + +ArchivesSpace has abstracted all the labels, messages and tooltips out of the +application into the locale files, which are part of the +[Rails Internationalization (I18n)](http://guides.rubyonrails.org/i18n.html) API. +The locales in this directory represent the +basis of translations for use by all ArchivesSpace applications. Each +application may then add to or override these values with its own locale files. + +For a guide on managing these "i18n" files, please visit http://guides.rubyonrails.org/i18n.html + +You can see the source files for both the [Staff Frontend Application](https://github.com/archivesspace/archivesspace/tree/master/frontend/config/locales) and +[Public Application](https://github.com/archivesspace/archivesspace/tree/master/public/config/locales). There is also a [common locale file](https://github.com/archivesspace/archivesspace/blob/master/common/locales/en.yml) for some values used throughout the ArchivesSpace applications.
+ +The base translations are broken up as follows: + +- The topmost file, "en.yml", contains the translations for all the record labels, messages and tooltips in English +- "enums/en.yml" contains the entries for the dynamic enumeration codes - add your translations to this file after importing your enumeration codes + +These values are pulled into the views using the I18n.t() method, like I18n.t("brand.welcome_message"). + +If the value you want to override is in the common locale file (like the "digital object title" field label, for example), you can change this by simply editing the locales/en.yml file in your ArchivesSpace distribution home directory. A restart is required to have the changes take effect. + +If the value you want to change is in either the public or staff specific en.yml files, you can override these values using the plugins directory. For example, if you want to change the welcome message on the public frontend, make a file in your ArchivesSpace distribution called 'plugins/local/public/locales/en.yml' and put the following values: + +```yaml +en: + brand: + title: My Archive + home: Home + welcome_message: HEY HEY HEY!! +``` + +If you restart ArchivesSpace, these values will take effect. + +If you are adding a new value, you will also need to add the value into the Staff Frontend Application by clicking on the System dropdown menu and choosing Manage Controlled Value Lists. Select the list and add the value. If you restart ArchivesSpace, the translation value that you set in the yml file should appear. + +If you're using a different language, simply swap out the en.yml for something else (like fr.yml) and update the locale setting in the config.rb file (i.e., AppConfig[:locale] = :fr). + +## Tooltips + +To add a tooltip to a record label, simply add a new entry with "\_tooltip" +appended to the label's code.
For example, to add a tooltip for the Accession's +Title field: + +```yaml +en: + accession: + title: Title + title_tooltip: | + <p>The title assigned to an accession or resource. The accession title + need not be the same as the resource title. Moreover, a title need not + be expressed for the accession record, as it can be implicitly + inherited from the resource record to which the accession is + linked.</p> +``` + +## Placeholders + +For text fields or text areas, you may want placeholder text to be +displayed when the field is empty (for more details see +http://www.w3.org/html/wg/drafts/html/master/forms.html#the-placeholder-attribute). +Please note that while most modern browsers support this feature, +older versions will not. + +To add a placeholder to a record's text field, add a new entry with "\_placeholder" +appended to the label's code. For example: + +```yaml +en: + accession: + title: Title + title_placeholder: See DACS 2.3.18-2.3.22 +``` diff --git a/src/content/docs/de/customization/plugins.md b/src/content/docs/de/customization/plugins.md new file mode 100644 index 0000000..c9c4f95 --- /dev/null +++ b/src/content/docs/de/customization/plugins.md @@ -0,0 +1,343 @@ +--- +title: Plugins +description: An overview of how to develop, structure, enable, and configure plugins in ArchivesSpace to customize application behavior, interface, branding, and search functionality without altering core code. +--- + +Plugins are a powerful feature, designed to allow you to change +most aspects of how the application behaves. + +Plugins provide a mechanism to customize ArchivesSpace by overriding or extending functions +without changing the core codebase. As they are self-contained, they also permit the ready +sharing of packages of customization between ArchivesSpace instances. + +The ArchivesSpace distribution comes with the `hello_world` exemplar plugin.
Please refer to its [README file](https://github.com/archivesspace/archivesspace/blob/master/plugins/hello_world/README.md) for a detailed description of how it is constructed and implemented. + +You can find other examples in the following plugin repositories. The ArchivesSpace plugins that are officially supported and maintained by the ArchivesSpace Program Team are in archivesspace-plugins (https://github.com/archivesspace-plugins). Deprecated code that is no longer supported but has been kept for future reference is in archivesspace-deprecated (https://github.com/archivesspace-deprecated). There is also an open/unmanaged GitHub repository, archivesspace-labs (https://github.com/archivesspace-labs), where community members can share their code. The community-developed Python library for interacting with the ArchivesSpace API, called ArchivesSnake, is managed in the archivesspace-labs repository. + +## Enabling plugins + +Plugins are enabled by placing them in the `plugins` directory, and referencing them in the +ArchivesSpace configuration, `config/config.rb`. For example: + +```ruby +AppConfig[:plugins] = ['local', 'hello_world', 'my_plugin'] +``` + +This configuration assumes the following directories exist: + + plugins + hello_world + local + my_plugin + +Note that the order in which the plugins are listed in the `:plugins` configuration option +determines the order in which they are loaded by the application. + +## Plugin structure + +The directory structure within a plugin is similar to the structure of the core application. +The following shows the supported plugin structure. Files contained in these directories can +be used to override or extend the behavior of the core application. + + backend + controllers ......... backend endpoints + model ............... database mapping models + converters .......... classes for importing data + job_runners ......... classes for defining background jobs + plugin_init.rb ...... 
if present, loaded when the backend first starts + lib/bulk_import ..... bulk import processor + frontend + assets .............. static assets (such as images, javascript) in the staff interface + controllers ......... controllers for the staff interface + locales ............. locale translations for the staff interface + views ............... templates for the staff interface + plugin_init.rb ...... if present, loaded when the staff interface first starts + public + assets .............. static assets (such as images, javascript) in the public interface + controllers ......... controllers for the public interface + locales ............. locale translations for the public interface + views ............... templates for the public interface + plugin_init.rb ...... if present, loaded when the public interface first starts + migrations ............ database migrations + schemas ............... JSONModel schema definitions + search_definitions.rb . Advanced search fields + +**Note** that `backend/lib/bulk_import` is the only directory in `backend/lib/` that is loaded by the plugin manager. Other files in `backend/lib/` will not be loaded during startup. + +**Note** that, in order to override or extend the behavior of core models and controllers, you cannot simply put your replacement with the same name in the corresponding directory path. Core models and controllers can be overridden by adding an `after_initialize` block to `plugin_init.rb` (e.g. [aspace-hvd-pui](https://github.com/harvard-library/aspace-hvd-pui/blob/master/public/plugin_init.rb#L43)). + +## Overriding behavior + +A general rule is: to override behavior, rather than extend it, match the path +to the file that contains the behavior to be overridden. + +It is not necessary for a plugin to have all of these directories.
For example, to override +some part of a locale file for the staff interface, you can just add the following structure +to the local plugin: + + plugins/local/frontend/locales/en.yml + +More detailed information about overriding locale files is found in [Customizing text in ArchivesSpace](/customization/locales). + +## Overriding the visual (web) presentation + +You can directly override any view file in the core application by placing an erb file of the same name in the analogous path. +For example, if you want to override the appearance of the "Welcome" [home] page of the Public User Interface, you can make your changes to a file `show.html.erb` and place it at `plugins/my_fine_plugin/public/views/welcome/show.html.erb` (where _my_fine_plugin_ is the name of your plugin). + +### Implementing a broadly-applied style or javascript change + +Unless you want to write inline style or javascript (which may be practicable for a template or two), best practice is to create `plugins/my_fine_plugin/public/views/layout_head.html.erb` or `plugins/my_fine_plugin/frontend/views/layout_head.html.erb`, which contains the HTML statements to incorporate your javascript or css into the `<HEAD>` element of the template. Here's an example: + +- For the public interface, I want to change the size of the text in all links when the user is hovering.
+ - I create `plugins/my_fine_plugin/public/assets/my.css`: + ```css + a:hover { + font-size: 2em; + } + ``` + - I create `plugins/my_fine_plugin/public/views/layout_head.html.erb`, and insert: + ```erb + <%= stylesheet_link_tag "#{@base_url}/assets/my.css", media: :all %> + ``` +- For the public interface, I want to add some javascript behavior such that, when the user hovers over a list item, asterisks appear + - I create `plugins/my_fine_plugin/public/assets/my.js`: + ```javascript + $(function () { + $('li').hover( + function () { + $(this).append($('<span> ***</span>')) + }, + function () { + $(this).find('span:last').remove() + } + ) + }) + ``` + - I add to `plugins/my_fine_plugin/public/views/layout_head.html.erb`: + ```erb + <%= javascript_include_tag "#{@base_url}/assets/my.js" %> + ``` + +## Adding your own branding + +As another example, to override the branding of the staff interface, add +your own template at: + + plugins/local/frontend/views/site/_branding.html.erb + +Files such as images, stylesheets and PDFs can be made available as static resources by +placing them in an `assets` directory under an enabled plugin. For example, the following file: + + plugins/local/frontend/assets/my_logo.png + +Will be available via the following URL: + + http://your.frontend.domain.and:port/assets/my_logo.png + +For example, to reference this logo from the custom branding file, use +markup such as: + +```erb + <div class="container branding"> + <img src="<%= AppConfig[:frontend_proxy_prefix] %>assets/my_logo.png" alt="My logo" /> + </div> +``` + +## Customizing the favicon + +A favicon is an icon associated with a web page that browsers and operating systems display (i.e. in a browser's address bar or tab, next to the web page name in a bookmark list, etc.). + +### Default images + +The ArchivesSpace favicons are stored in the top-level `public/` directory of the frontend and public applications. + +1. `frontend/public/favicon-AS.png` +2.
`frontend/public/favicon-AS.svg` +3. `public/public/favicon-AS.png` +4. `public/public/favicon-AS.svg` + +### Markup + +Favicon markup is found in each application's favicon partial template: + +1. `frontend/app/views/site/_favicon.html.erb` +2. `public/app/views/shared/_favicon.html.erb` + +### Configuration + +Favicons are shown by default via the configuration options in `config.rb` (or `common/config/config-defaults.rb` in development). Set the respective option to `false` to not show a favicon. + +```ruby +# config.rb +AppConfig[:pui_show_favicon] = true # whether or not to show a favicon +AppConfig[:frontend_show_favicon] = true # whether or not to show a favicon +``` + +### Plugin examples + +Replace the default favicon with your own via a plugin. + +:::caution[Reserved favicon filenames] +Custom favicon files must be named something other than `favicon-AS.png` and `favicon-AS.svg` in order to override the default favicon. +::: + +#### Frontend + +The frontend plugin should have the following directory structure: + +``` +plugins/local/frontend/ +├── assets +│   ├── favicon.png +│   └── favicon.svg +└── views + └── site + └── _favicon.html.erb +``` + +The frontend favicon template should look something like: + +```erb +<!-- plugins/local/frontend/views/site/_favicon.html.erb --> +<link rel="icon" type="image/png" href="<%= AppConfig[:frontend_proxy_prefix] %>assets/favicon.png"> +<link rel="icon" type="image/svg+xml" href="<%= AppConfig[:frontend_proxy_prefix] %>assets/favicon.svg"> +``` + +#### Public + +The public plugin should have the following directory structure: + +``` +plugins/local/public/ +├── assets +│   ├── favicon.png +│   └── favicon.svg +└── views + └── shared + └── _favicon.html.erb +``` + +The public favicon template should look something like: + +```erb +<!-- plugins/local/public/views/shared/_favicon.html.erb --> +<link rel="icon" type="image/png" href="<%= asset_path('favicon.png', skip_pipeline: true) %>"> +<link rel="icon"
type="image/svg+xml" href="<%= asset_path('favicon.svg', skip_pipeline: true) %>"> +``` + +## Plugin configuration + +Plugins can optionally contain a configuration file at `plugins/[plugin-name]/config.yml`. +This configuration file supports the following options: + + system_menu_controller + The name of a controller that will be accessible via a Plugins menu in the System toolbar + repository_menu_controller + The name of a controller that will be accessible via a Plugins menu in the Repository toolbar + parents + [record-type] + name + cardinality + ... + +`system_menu_controller` and `repository_menu_controller` specify the names of frontend controllers +that will be accessible via the system and repository toolbars respectively. A `Plugins` dropdown +will appear in the toolbars if any enabled plugins have declared these configuration options. The +controller name follows the standard naming conventions, for example: + +```yaml +repository_menu_controller: hello_world +``` + +This points to a controller file at `plugins/hello_world/frontend/controllers/hello_world_controller.rb` +which implements a controller class called `HelloWorldController`. When the menu item is selected +by the user, the `index` action is called on the controller. + +Note that the URLs for plugin controllers are scoped under `plugins`, so the URL for the above +example is: + + http://your.frontend.domain.and:port/plugins/hello_world + +Also note that the translation for the plugin's name in the `Plugins` dropdown menu is specified +in a locale file in the `frontend/locales` directory in the plugin.
For example, in the `hello_world` +example there is an English locale file at: + + plugins/hello_world/frontend/locales/en.yml + +The translation for the plugin name in the `Plugins` dropdown menus is specified by the key `label` +under the plugin, like this: + +```yaml +en: + plugins: + hello_world: + label: Hello World +``` + +Note that the example locale file contains other keys that specify translations for text displayed +as part of the plugin's user interface. Be sure to place your plugin's translations as shown, under +`plugins.[your_plugin_name]` in order to avoid accidentally overriding translations for other +interface elements. In the example above, the translation for the `label` key can be referenced +directly in an erb view file as follows: + +```erb +<%= I18n.t("plugins.hello_world.label") %> +``` + +Each entry under `parents` specifies a record type that this plugin provides a new subrecord for. +`[record-type]` is the name of the existing record type, for example `accession`. `name` is the +name of the plugin in its role as a subrecord of this parent, for example `hello_worlds`. +`cardinality` specifies the cardinality of the plugin records. Currently supported values are +`zero-to-many` and `zero-to-one`. + +## Changing search behavior + +A plugin can add additional fields to the advanced search interface by +including a `search_definitions.rb` file at the top level of the +plugin directory. This file can contain definitions such as the +following: + +```ruby +AdvancedSearch.define_field(:name => 'payment_fund_code', :type => :enum, :visibility => [:staff], :solr_field => 'payment_fund_code_u_utext') +AdvancedSearch.define_field(:name => 'payment_authorizers', :type => :text, :visibility => [:staff], :solr_field => 'payment_authorizers_u_utext') +``` + +Each field defined will appear in the advanced search interface as a +searchable field.
The `:visibility` option controls whether the field +is presented in the staff or public interface (or both), while the +`:type` parameter determines what sort of search is being performed. +Valid values are `:text`, `:boolean`, `:date` and `:enum`. Finally, +the `:solr_field` parameter controls which field is used from the +underlying index. + +## Adding Custom Reports + +Custom reports may be added to plugins by adding a new report model as a subclass of `AbstractReport` to `plugins/[plugin-name]/backend/model/`, and the translations for said model to `plugins/[plugin-name]/frontend/locales/[language].yml`. Look to existing reports in the reports subdirectory of the ArchivesSpace base directory for examples of how to structure a report model. + +There are several limitations to adding reports to plugins, including that reports from plugins may only use the generic report template. ArchivesSpace only searches for report templates in the reports subdirectory of the ArchivesSpace base directory, not in plugin directories. If you would like to implement a custom report with a custom template, consider adding the report to `archivesspace/reports/` instead of `archivesspace/plugins/[plugin-name]/backend/model/`. + +## Frontend Specific Hooks + +To make adding new record fields and sections to record forms a little easier via your plugin, the ArchivesSpace frontend provides a series of hooks via the `frontend/config/initializers/plugin.rb` module. These are as follows: + +- `Plugins.add_search_base_facets(*facets)` - add to the base facets list to include extra facets for all record searches and listing pages. + +- `Plugins.add_search_facets(jsonmodel_type, *facets)` - add facets for a particular JSONModel type to be included in searches and listing pages for that record type. + +- `Plugins.add_resolve_field(field_name)` - use this when you have added a new field/relationship and you need it to be resolved when the record is retrieved from the API.
+ +- `Plugins.register_edit_role_for_type(jsonmodel_type, role)` - when you add a new top-level JSONModel, register it and its edit role so the listing view can determine if the "Edit" button can be displayed to the user. + +- `Plugins.register_note_types_handler(proc)` where proc handles parameters `jsonmodel_type, note_types, context` - allows a plugin to customize the note types shown for a particular JSONModel type. For example, you can filter those that do not apply to your institution. + +- `Plugins.register_plugin_section(section)` - allows you to define a template to be inserted as a section for a given JSONModel record. A section is a type of `Plugins::AbstractPluginSection` which defines the source `plugin`, section `name`, the `jsonmodel_types` for which the section should show and any `opts` required by the templates at the time of render. These new sections (readonly, edit and sidebar additions) are output as part of the `PluginHelper` render methods. + + `Plugins::AbstractPluginSection` can be subclassed to allow flexible inclusion of arbitrary HTML. There are two examples provided with ArchivesSpace: + - `Plugins::PluginSubRecord` - uses the `shared/subrecord` partial to output a standard styled ArchivesSpace section. `opts` requires the jsonmodel field to be defined. + + - `Plugins::PluginReadonlySearch` - uses the `search/embedded` partial to output a search listing as a section. `opts` requires the custom filter terms for this search to be defined.
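Taken together, these hooks are typically called from a plugin's frontend `plugin_init.rb`. A minimal sketch, assuming a plugin named `my_fine_plugin` as elsewhere on this page; the facet and field names (`payment_fund_code_u_utext`, `payment_summary`) are purely illustrative and not part of core ArchivesSpace:

```ruby
# plugins/my_fine_plugin/frontend/plugin_init.rb
# Sketch only: the facet and field names below are hypothetical.

# Add an extra facet to every search and listing page
Plugins.add_search_base_facets('payment_fund_code_u_utext')

# Resolve a hypothetical linked field whenever records are fetched from the API
Plugins.add_resolve_field('payment_summary')
```

Because this file runs when the staff interface starts, a restart is needed before changes take effect.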
+ +## Further information + +**Be sure to test your plugin thoroughly as it may have unanticipated impacts on your +ArchivesSpace application.** diff --git a/src/content/docs/de/customization/reports.md b/src/content/docs/de/customization/reports.md new file mode 100644 index 0000000..343513a --- /dev/null +++ b/src/content/docs/de/customization/reports.md @@ -0,0 +1,51 @@ +--- +title: Reports +description: Instructions for creating custom reports and subreports in ArchivesSpace, including required structure, SQL usage, translations, optional customization methods, and integration with the reporting framework. +--- + +Adding a report is intended to be a fairly simple process. The requirements for creating a report are outlined below. + +## Adding a Report + +### Required + +- Create a class for your report that is a subclass of AbstractReport. +- Call register_report. If your report has any parameters, specify them here. +- Implement query_string + - This should be a raw SQL string + - To prevent SQL injection, use db.literal for any user input, i.e. use `"select * from table where column = #{db.literal(value)}"` instead of `"select * from table where column = '#{value}'"` +- Provide translations for column headers and the title of your report + - They should be in yml files under _language_.reports._report name_ + - The translation for title should be whatever you want the name of the report to be. + - If the translation you want is already in _language_.reports.translation_defaults (found in the static folder) you do not need to specify it. + - Translations specific to the individual report are given priority over translation defaults. + +### Optional + +- Implement your own initializer if your report has any parameters. +- Implement fix_row in order to clean up data and add subreports. + - Each result will be passed to fix_row as a hash + - ReportUtils offers various class methods to simplify cleaning up data.
+ - You can also add subreports here with something like `row[:subreport_name] = SubreportClassName.new(self, row[:id]).get_content` where row is the result as a hash which was a parameter to fix_row. See [Adding a Subreport](#adding-a-subreport) for more information on adding subreports. + - Sometimes you will want to delete something from the result that you needed in order to generate a subreport but do not want to show up in the final report (such as id). To do this, use `row.delete(:id)`. +- Special implementation of query - The default implementation is simply `db.fetch(query_string)` but implementing it yourself may give you more flexibility. In the end, it needs to return a result set. +- There is a hash called info that controls what shows up in the header at the top of the report. Examples may include total record count, total extent, or any parameters that are provided by the user for your report. Add anything you want to show up in the report header to info. Repository name will be included automatically. Be sure to provide translations for the keys you add to info. +- after_tasks is run after fix_row executes on all the results. Implement this if you have anything that needs to get done here before the report is rendered. +- Specify identifier_field if you want to add a heading to each individual record. For instance, identifier_field might be `:accession_number` for a report on accessions. +- Implement page_break to be false if you do not want a page break after each record in the PDF of the report. +- Implement special_translation if there is anything you want to translate in a special way (i.e. it can't be accomplished by the yml file). + +## Adding A Subreport + +### Required + +- Create a class for your subreport that is a subclass of AbstractSubreport. +- Create an initializer that takes in the parent report/subreport as well as any parameters you need to run the subreport (usually this is just an id from the result in the parent report/subreport).
Your initializer should call `super(parent_report)`. +- Implement query_string. This works the same way as it does for reports. +- Provide necessary translations. + +### Optional + +- Special implementation of query +- fix_row works just like in reports + - note that you can add nested subreports diff --git a/src/content/docs/de/customization/theming.md b/src/content/docs/de/customization/theming.md new file mode 100644 index 0000000..9e15c0a --- /dev/null +++ b/src/content/docs/de/customization/theming.md @@ -0,0 +1,141 @@ +--- +title: Theming +description: A guide to customizing the look and feel of ArchivesSpace using plugins or full theme rebuilds, including instructions for changing logos, CSS, and layout elements in both the public and staff interfaces. +--- + +## Making small changes + +It's easiest to use a plugin for small changes to your site's theme. With a plugin, +we can override default views, controllers, models, etc. without having to do a +complete rebuild of the source code. Be sure to remove the `#` at the beginning of +any line that you want to change. Any line that starts with a `#` is ignored. + +Let's say we wanted to change the branding logo on the public +interface. That can be easily changed in your `config.rb` file: + +```ruby +AppConfig[:pui_branding_img] +``` + +That setting is used by the file found in `public/app/views/shared/_header.html.erb` to display your PUI side logo. You don't need to change that file, only the setting in your `config.rb` file. + +You can store the image in `plugins/local/public/assets/images/logo.png`. You'll most likely need to create one or more of the directories.
+ +Your `AppConfig[:pui_branding_img]` setting should look something like this: + +```ruby +AppConfig[:pui_branding_img] = '/assets/images/logo.png' +``` + +Alt text for the PUI branding image can and should also be supplied via: + +```ruby +AppConfig[:pui_branding_img_alt_text] = 'My alt text' +``` + +If you want your image on the PUI to link out to another location, you will need to make a change to the file `public/app/views/shared/_header.html.erb`. The line that creates the logo just needs an `a href` added. You should also alter `AppConfig[:pui_branding_img_alt_text]` to make it clear that the image also functions as a link (e.g. `AppConfig[:pui_branding_img_alt_text] = 'Back to Example College Home'`). That will end up looking something like this: + +```erb +<div class="col-sm-3 hidden-xs"><a href="https://example.com"><img class="logo" src="<%= asset_path(AppConfig[:pui_branding_img]) %>" alt="<%= AppConfig[:pui_branding_img_alt_text] %>" /></a></div> +``` + +The Staff Side logo will need a small plugin file and cannot be set in your `config.rb` file. This needs to be changed in the `plugins/local/frontend/views/site/_branding.html.erb` file. You'll most likely need to create one or more of the directories. Then create that `_branding.html.erb` file and paste in the following code: + +```erb +<div class="container-fluid navbar-branding"> + <%= image_tag "archivesspace/archivesspace.small.png", :class=>"img-responsive", :alt=>"My image alt text" %> +</div> +``` + +Change the `"archivesspace/archivesspace.small.png"` to the path to your image `/assets/images/logo.png` and place your logo in the `plugins/local/frontend/assets/images/` directory.
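If the plugin directories above don't exist yet, they can be created in one step from the ArchivesSpace root; the paths match this example, while the source path of your logo is of course your own:

```shell
# Create the staff-side plugin asset directory used in this example
mkdir -p plugins/local/frontend/assets/images

# Then copy your logo into place, for example:
# cp /path/to/logo.png plugins/local/frontend/assets/images/logo.png
```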
+ +**Note:** Since anything we add to the plugins directory will not be precompiled by +the Rails asset pipeline, we cannot use some of the tag helpers +(like `img_tag`), since those assume the asset is managed by the +asset pipeline. + +Restart the application and you should see your logo in the default view. + +## Adding CSS rules + +You can customize CSS through the plugin system too. If you don't want to create +a whole new plugin, the easiest way is to modify the 'local' plugin that ships +with ArchivesSpace (it's intended for this kind of site-specific change). As +long as you've still got 'local' listed in your AppConfig[:plugins] list, your +changes will get picked up. + +To do that, create a file called +`archivesspace/plugins/local/frontend/views/layout_head.html.erb` for the staff +side or `archivesspace/plugins/local/public/views/layout_head.html.erb` for the +public. Then you can add the line to include the CSS in the site: + +```erb +<%= stylesheet_link_tag "#{@base_url}/assets/custom.css" %> +``` + +Then place your CSS in the file: + + staff side: + archivesspace/plugins/local/frontend/assets/custom.css + or public side: + archivesspace/plugins/local/public/assets/custom.css + +and it will get loaded on each page. + +You may also want to make changes to the main index page, or the header and +footer. Those overrides would go into the following places for the public side +of your site: + + archivesspace/plugins/local/public/views/welcome/show.html.erb + archivesspace/plugins/local/public/views/shared/_header.html.erb + archivesspace/plugins/local/public/views/shared/_footer.html.erb + +## Heavy re-theming + +If you're wanting to really trick out your site, you could do this in a plugin +using the override methods shown above, although there are some big disadvantages +to this. The first is that assets will not be compiled by the Rails asset +pipeline.
Another is that you won't be able to take advantage of the variables +and mixins that Bootstrap and Less provide as a framework, which really helps +keep your assets well organized. + +A better way to do this is to pull down a copy of the ArchivesSpace code and +build out a new theme. A good resource on how to do this is +[this video](https://www.youtube.com/watch?v=Uny736mZVnk). +This video covers the staff frontend UI, but the same steps can be applied to +the public UI as well. + +Also become a little familiar with the +[build system instructions](/development/dev). + +First, pull down a new copy of ArchivesSpace using git and be sure to check out +a tag matching the version you're using or wanting to use. + +```shell +$ git clone https://github.com/archivesspace/archivesspace.git +$ git checkout v2.5.2 +``` + +You can start your application development server by executing: + +```shell +$ ./build/run bootstrap +$ ./build/run backend:devserver +$ ./build/run frontend:devserver +$ ./build/run public:devserver +``` + +**Note:** You don't have to run all these commands all the time. The bootstrap +command really only has to be run the first time you pull down the code -- +it will also take a while. You also don't have to start the frontend or public +if you're not working on those interfaces. The backend does have to be started for +either the public or frontend interfaces to work. + +Follow the instructions in the video to create a new theme. A good way is to copy the existing default theme to a new folder and start making your updates. Be sure to take advantage of the existing variables set in the Less files to keep your assets nice and organized. + +Once you've updated your theme and have it working, you can package your application. You can use the ./scripts/build_release script to build a totally fresh AS distribution, but you don't need to do that if you've simply made some minor changes to the UI.
Instead, use `./build/run public:war` to compile your assets and package a war file. You can then take this public.war file and replace the one in your ASpace distribution.
+
+Be sure to update your theme setting in the config.rb file and restart ASpace.
diff --git a/src/content/docs/de/customization/xsl.md b/src/content/docs/de/customization/xsl.md
new file mode 100644
index 0000000..5ed0605
--- /dev/null
+++ b/src/content/docs/de/customization/xsl.md
@@ -0,0 +1,17 @@
+---
+title: XSL stylesheets
+description: Information about the XSL stylesheets for transforming ArchivesSpace EAC-CPF and EAD exports into HTML or PDF, using Saxon for processing.
+---
+
+ArchivesSpace includes three stylesheets for you to transform exported data
+into human-friendly formats. The stylesheets included are as follows:
+
+- `as-eac-cpf-html.xsl`: Generates HTML from EAC-CPF records
+- `as-ead-html.xsl`: Generates HTML from EAD records
+- `as-ead-pdf.xsl`: Generates XSL-FO output from EAD for transformation into PDF
+
+These stylesheets have been tested and are known to work with
+[Saxon](http://saxonica.com/download/download_page.xml) 9.5.1.1 and higher.
+
+The `as-helper-functions.xsl` stylesheet is required by the other three
+stylesheets listed above.
diff --git a/src/content/docs/de/development/dev.md b/src/content/docs/de/development/dev.md
new file mode 100644
index 0000000..b33f69d
--- /dev/null
+++ b/src/content/docs/de/development/dev.md
@@ -0,0 +1,495 @@
+---
+title: Development environment
+description: Guidance for setting up a development environment for ArchivesSpace, including system requirements, supported development platforms, a quickstart guide, and step-by-step instructions. 
+---
+
+System requirements:
+
+- Java 17
+- [Docker](https://www.docker.com/) & [Docker Compose](https://docs.docker.com/compose/) are optional but make running MySQL and Solr more convenient
+- [Supervisord](http://supervisord.org/) is optional but makes running the development servers more convenient
+- [mysql-client](https://www.bytebase.com/reference/mysql/how-to/how-to-install-mysql-client-on-mac-ubuntu-centos-windows/) is required in order to load demo data or other SQL dumps into the database
+
+Currently supported platforms for development:
+
+- Linux (although generally only Ubuntu is actually used / tested)
+- macOS on Intel (x86_64)
+- macOS on Apple silicon (ARM64) _since v4.0.0_
+
+:::note[Apple silicon and ArchivesSpace before v4.0.0]
+To install versions of ArchivesSpace prior to v4.0.0 with macOS on Apple silicon, see [https://teaspoon-consulting.com/articles/archivesspace-on-the-m1.html](https://teaspoon-consulting.com/articles/archivesspace-on-the-m1.html).
+:::
+
+:::danger[Windows development not supported]
+Windows is not supported because of issues building gems with C extensions (such as sassc).
+:::
+
+When installing Java, [OpenJDK](https://openjdk.org/) is strongly recommended. Other vendors may work, but OpenJDK is the most extensively used and tested. It is highly recommended that you use a version manager such as [mise](https://mise.jdx.dev/lang/java.html) to install Java (OpenJDK). This has proven to be a reliable way of resolving cross-platform issues that have occurred via other means of installing Java. 
+
+Installing OpenJDK with mise will look something like:
+
+```bash
+mise use -g java@openjdk-17
+```
+
+On Linux/Ubuntu, it is generally fine to install from system packages:
+
+```bash
+sudo apt install openjdk-$VERSION-jdk-headless
+# example: install 17
+sudo apt install openjdk-17-jdk-headless
+# update-java-alternatives can be used to switch between versions
+sudo update-java-alternatives --list
+sudo update-java-alternatives --set $version
+```
+
+For [Homebrew](https://brew.sh/) users (macOS, Linux), the OpenJDK distribution from Azul has been reported to work:
+
+```bash
+# install Java v17 for example
+brew install --cask zulu@17
+```
+
+If using Docker & Docker Compose, install them following the official documentation:
+
+- [https://docs.docker.com/get-docker/](https://docs.docker.com/get-docker/)
+- [https://docs.docker.com/compose/install/](https://docs.docker.com/compose/install/)
+
+_Do not use system packages or any other unofficial source, as these have been found to be inconsistent with standard Docker._
+
+The recommended way of developing ArchivesSpace is to fork the repository and clone it locally.
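Sketched concretely, the fork-and-clone setup looks like the following. This is a runnable illustration, not part of the ArchivesSpace tooling: a throwaway local bare repository stands in for GitHub so the commands work anywhere. In practice, `ORIGIN_URL` would be your fork (for example `https://github.com/<your-username>/archivesspace.git`) and `UPSTREAM_URL` would be the main `archivesspace/archivesspace` repository.

```shell
# Stand-in for GitHub: a throwaway local bare repository.
WORK="$(mktemp -d)"
UPSTREAM_URL="$WORK/upstream.git"          # in practice: the main archivesspace repo URL
git init --bare --quiet "$UPSTREAM_URL"

ORIGIN_URL="$UPSTREAM_URL"                 # in practice: your fork's URL

# Clone your fork, then add the main repository as an "upstream" remote
# so you can pull in new commits later with `git fetch upstream`.
git clone --quiet "$ORIGIN_URL" "$WORK/archivesspace" 2>/dev/null
git -C "$WORK/archivesspace" remote add upstream "$UPSTREAM_URL"
git -C "$WORK/archivesspace" remote -v
```

Keeping the main repository as a second remote makes it easy to rebase your branches on the latest `master` before opening a pull request.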
+ +_Note: all commands in the following instructions assume you are in the root directory of your local fork +unless otherwise specified._ + +**Quickstart** + +This is an abridged reference for getting started with a limited explanation of the steps: + +```bash +# Build images (required one time only for most use cases) +docker-compose -f docker-compose-dev.yml build +# Run MySQL and Solr in the background +docker-compose -f docker-compose-dev.yml up --detach +# Download the MySQL connector +cd ./common/lib && wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.30/mysql-connector-java-8.0.30.jar && cd - +# Download all application dependencies +./build/run bootstrap +# OPTIONAL: load dev database +gzip -dc ./build/mysql_db_fixtures/demo.sql.gz | mysql --host=127.0.0.1 --port=3306 -u root -p123456 archivesspace +# Setup the development database +./build/run db:migrate +# Clear out any existing Solr state (only needed after a database setup / restore after previous development) +./build/run solr:reset +# Run the development servers +supervisord -c supervisord/archivesspace.conf +# OPTIONAL: Run a backend (api) test (for checking setup is correct) +./build/run backend:test -Dexample="User model" +``` + +## Step by Step explanation + +### Run MySQL and Solr + +ArchivesSpace development requires MySQL and Solr to be running. The easiest and +recommended way to run them is using the Docker Compose configuration provided by ArchivesSpace. + +Start by building the images. This creates a custom Solr image that includes ArchivesSpace's configuration: + +```bash +docker-compose -f docker-compose-dev.yml build +``` + +_Note: you only need to run the above command once. 
You would only need to rerun this command if a)
+you delete the image and therefore need to recreate it, or b) you make a change to ArchivesSpace's Solr
+configuration and therefore need to rebuild the image to include the updated configuration._
+
+Run MySQL and Solr in the background:
+
+```bash
+docker-compose -f docker-compose-dev.yml up --detach
+```
+
+By using Docker Compose to run MySQL and Solr you are guaranteed to have the correct connection settings
+and don't otherwise need to define connection settings for MySQL or Solr.
+
+Verify that MySQL & Solr are running: `docker ps`. It should list the running containers:
+
+```txt
+CONTAINER ID   IMAGE                       COMMAND                  CREATED       STATUS       PORTS                               NAMES
+ec76bd09d73b   mysql:8.0                   "docker-entrypoint.s…"   8 hours ago   Up 8 hours   33060/tcp, 0.0.0.0:3307->3306/tcp   as_test_db
+30574171530f   archivesspace/solr:latest   "docker-entrypoint.s…"   8 hours ago   Up 8 hours   0.0.0.0:8984->8983/tcp              as_test_solr
+d84a6a183bb0   archivesspace/solr:latest   "docker-entrypoint.s…"   8 hours ago   Up 8 hours   0.0.0.0:8983->8983/tcp              as_dev_solr
+7df930293875   mysql:8.0                   "docker-entrypoint.s…"   8 hours ago   Up 8 hours   0.0.0.0:3306->3306/tcp, 33060/tcp   as_dev_db
+```
+
+To check the servers are online:
+
+- MySQL: `mysql -h 127.0.0.1 -u as -pas123 archivesspace`
+- Solr: `curl http://localhost:8983/solr/admin/cores`
+
+To stop and/or remove the servers:
+
+```bash
+docker-compose -f docker-compose-dev.yml stop # shuts down the servers (data will be preserved)
+docker-compose -f docker-compose-dev.yml rm # deletes the containers (all data will be removed)
+```
+
+**Advanced: running MySQL and Solr outside of Docker**
+
+You are not required to use Docker for MySQL and Solr. 
If you run them another way, the default
+requirements are:
+
+- dev MySQL: localhost:3306, database: archivesspace, username: as, password: as123
+- test MySQL: localhost:3307, database: archivesspace, username: as, password: as123
+- dev Solr: localhost:8983, create an archivesspace core using the ArchivesSpace configuration
+- test Solr: localhost:8984, create an archivesspace core using the ArchivesSpace configuration
+
+The defaults can be changed using [environment variables](https://github.com/archivesspace/archivesspace/blob/master/build/build.xml#L43-L46) located in the build file.
+
+### Download the MySQL connector
+
+For licensing reasons the MySQL connector must be downloaded separately:
+
+```bash
+cd ./common/lib
+wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.30/mysql-connector-java-8.0.30.jar
+cd -
+```
+
+### Run bootstrap
+
+The bootstrap task:
+
+    ./build/run bootstrap
+
+will bootstrap your development environment by downloading all
+dependencies--JRuby, Gems, etc. This one command creates a fully
+self-contained development environment where everything is downloaded
+within the ArchivesSpace project `build` directory.
+
+_It is not necessary and generally incorrect to manually install JRuby
+& bundler etc. for ArchivesSpace (whether with a version manager or
+otherwise)._
+
+_The self-contained ArchivesSpace development environment typically does
+not interact with other J/Ruby environments you may have on your system
+(such as those managed by rbenv or similar)._
+
+This is the starting point for all ArchivesSpace development. You may need
+to re-run this command after fetching updates, or when making changes to
+Gemfiles or other dependencies such as those in the `./build/build.xml` file.
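Before running bootstrap, it can save time to sanity-check the environment. The following is a hypothetical preflight sketch (not an ArchivesSpace script): it parses the active Java major version, handling both modern (`17.0.9`) and legacy (`1.8.0`) version strings, and checks that a MySQL connector jar has been placed in `./common/lib`.

```shell
# Hypothetical preflight check before bootstrap (not part of the build tooling).
expected_major=17

parse_java_major() {
  # e.g. 'openjdk version "17.0.9" 2023-10-17' -> 17 ; 'java version "1.8.0_392"' -> 8
  ver="$(printf '%s\n' "$1" | sed -n 's/.*version "\([0-9][0-9.]*\).*/\1/p')"
  case "$ver" in
    1.*) printf '%s\n' "$ver" | cut -d. -f2 ;;   # legacy scheme: 1.8 means Java 8
    *)   printf '%s\n' "$ver" | cut -d. -f1 ;;
  esac
}

if command -v java >/dev/null 2>&1; then
  major="$(parse_java_major "$(java -version 2>&1 | head -n1)")"
  [ "$major" = "$expected_major" ] || echo "warning: Java $major active, expected $expected_major"
fi

ls ./common/lib/mysql-connector-*.jar >/dev/null 2>&1 \
  || echo "warning: MySQL connector jar not found in ./common/lib"
```

Warnings here usually mean bootstrap will fail later, so fixing them first avoids a long wait.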
+
+**Errors running bootstrap**
+
+```txt
+ [java] INFO: jetty-9.4.44.v20210927; built: 2021-09-27T23:02:44.612Z; git: 8da83308eeca865e495e53ef315a249d63ba9332; jvm 11+28
+ [java] Exiting
+ [java] LoadError: no such file to load -- rails/commands
+ [java] require at org/jruby/RubyKernel.java:974
+ [java] <main> at script/rails:8
+```
+
+An error like the above can appear when running any of the development servers:
+
+    ./build/run backend:devserver
+    ./build/run frontend:devserver
+    ./build/run public:devserver
+    ./build/run indexer
+
+There have been various forms of the same `LoadError`. It's a transient error
+that is resolved by rerunning bootstrap.
+
+```txt
+ [java] org.jruby.Main -I uri:classloader://META-INF/jruby.home/lib/ruby/stdlib -r
+ [java] ./siteconf20220407-5224-13f6qi7.rb extconf.rb
+ [java] sh: /Library/Internet: No such file or directory
+ [java] sh: line 0: exec: /Library/Internet: cannot execute: No such file or directory
+ [java]
+ [java] extconf failed, exit code 126
+```
+
+This error has been seen on Mac platforms and results from the installation method
+used for Java. Installing the OpenJDK via Jabba has been effective in resolving
+this error.
+
+**Advanced: bootstrap & the build directory**
+
+Running bootstrap will download jars to the build directory, including:
+
+- jetty-runner
+- jruby
+- jruby-rack
+
+Gems will be downloaded to: `./build/gems/jruby/$version/gems/`.
+
+### Setup the development database
+
+The migrate task:
+
+```bash
+./build/run db:migrate
+```
+
+will set up the development database, creating all of the tables etc.
+required by the application.
+
+There is also a task for resetting the database:
+
+```bash
+./build/run db:nuke
+```
+
+which will first delete and then migrate the database.
+
+### Loading data fixtures into dev database
+
+When loading a database into the development MySQL instance, always ensure that ArchivesSpace
+is **not** running. Stop ArchivesSpace if it is running. Run `./build/run solr:reset` to
+clear indexer state (a more thorough explanation of this step is described below).
+ +If you are loading a database and MySQL has already been used for development you'll want to +drop and create an empty database first. + +```bash +mysql -h 127.0.0.1 -u as -pas123 -e "DROP DATABASE archivesspace" +mysql -h 127.0.0.1 -u as -pas123 -e "CREATE DATABASE IF NOT EXISTS archivesspace DEFAULT CHARACTER SET utf8mb4" +``` + +_Note: you can skip the above step if MySQL was just started for the first time or any time you +have an empty ArchivesSpace (one where `db:migrate` has not been run)._ + +Assuming you have MySQL running and an empty `archivesspace` database available you can proceed +to restore: + +```bash +gzip -dc ./build/mysql_db_fixtures/blank.sql.gz | mysql --host=127.0.0.1 --port=3306 -u root -p123456 archivesspace +./build/run db:migrate +``` + +_Note: The above instructions should work out-of-the-box. If you want to use your own database +and / or have configured MySQL differently then adjust the commands as needed._ + +After the restore `./build/run db:migrate` is run to catch any migration updates. You can now +proceed to run the application dev servers, as described below, with data already +populated in ArchivesSpace. + +### Clear out existing Solr state + +The Solr reset task: + +```bash +./build/run solr:reset +``` + +Will wipe out any existing Solr state. This is not required when setting +up for the first time, but is often required after a database reset (such as +after running the `./build/run db:nuke` task). 
+
+_More specifically, this submits a delete-all request to Solr and empties
+out the contents of the `./build/dev/indexer*_state` directories, which are described
+below._
+
+### Run the development servers
+
+Use [Supervisord](http://supervisord.org/) for a simpler way of running the development servers with output
+for all servers sent to a single terminal window:
+
+```bash
+# run all of the services
+supervisord -c supervisord/archivesspace.conf
+
+# run in api mode (backend + indexer only)
+supervisord -c supervisord/api.conf
+
+# run just the backend (useful for trying out endpoints that don't require Solr)
+supervisord -c supervisord/backend.conf
+```
+
+ArchivesSpace is started with:
+
+- the staff interface on [http://localhost:3000/](http://localhost:3000/)
+- the PUI on [http://localhost:3001/](http://localhost:3001/)
+- the API on [http://localhost:4567/](http://localhost:4567/)
+
+To stop supervisord: `Ctrl-c`.
+
+#### Advanced: running the development servers directly
+
+Supervisord is not required, nor is it ideal for every situation. You can run the development
+servers directly via build tasks:
+
+```bash
+./build/run backend:devserver # This is the REST API
+./build/run frontend:devserver # This is the staff user interface
+./build/run public:devserver # This is the public user interface
+./build/run indexer # This is the indexer (converts ASpace records to Solr Docs and ships to Solr)
+```
+
+These should be run in separate terminal sessions; they do not need to be started
+in a specific order, and not all of them are required.
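If you want the directly-run servers out of your terminal without supervisord, a small wrapper can background each task with its own log file. This is a hypothetical convenience script, not part of the build; the `./build/dev/logs` location and the task list are assumptions you can adjust.

```shell
# Hypothetical helper (not part of the build): run each dev server in the
# background with its own log file, as a lightweight alternative to
# supervisord or multiple terminals.
LOGDIR="${LOGDIR:-./build/dev/logs}"   # assumed location; any writable dir works
mkdir -p "$LOGDIR"

for task in backend:devserver indexer frontend:devserver public:devserver; do
  name="${task%%:*}"                               # "backend:devserver" -> "backend"
  ./build/run "$task" > "$LOGDIR/$name.log" 2>&1 &
  echo "$!" > "$LOGDIR/$name.pid"                  # keep the PID for shutdown
done

# Later, to stop everything started this way:
# for pid in "$LOGDIR"/*.pid; do kill "$(cat "$pid")"; done
```

Follow an individual server's output with, for example, `tail -f ./build/dev/logs/backend.log`.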
+
+_An example use case for running a server directly is to use the pry debugger._
+
+#### Advanced: debugging with pry
+
+To debug with pry you cannot use supervisord to run the application devserver;
+however, you can mix and match:
+
+```bash
+# run the backend and indexer with supervisord
+supervisord -c supervisord/api.conf
+
+# in a separate terminal run the frontend directly
+./build/run frontend:devserver
+```
+
+Add `require 'pry-debugger-jruby'; binding.pry` to set breakpoints in the code. This can also be used in views:
+`<% require 'pry-debugger-jruby'; binding.pry %>`. Using pry you can easily inspect the `request`, `params` and
+in-scope instance variables that are available. Typical debugger commands are available:
+
+- `step`: Step execution into the next line or method. Takes an optional numeric argument to step multiple times.
+- `next`: Step over to the next line within the same frame. Takes an optional numeric argument to step multiple times. Differs from step in that it always stays within the same frame (e.g. does not go into other method calls).
+- `finish`: Execute until current stack frame returns.
+- `continue`: Continue program execution and end the Pry session.
+- `puts caller.join("\n")`: Get the current stacktrace.
+
+See also [pry-debugger-jruby docs](https://gitlab.com/ivoanjo/pry-debugger-jruby).
+
+#### Advanced: development servers and the build directory
+
+Running the development servers will create directories in `./build/dev`:
+
+- `indexer_pui_state`: latest timestamps for PUI indexer activity
+- `indexer_state`: latest timestamps for (SUI) indexer activity
+- `shared`: background job files
+
+_Note: the folders will be created as they are needed, so they may not all be present
+at all times._
+
+#### Accessing development servers from other devices on the local network
+
+You can access the ArchivesSpace development servers from other devices on your local network. 
This is especially useful for testing on mobile operating systems.
+
+##### Prerequisites
+
+1. Your development machine and the other device must be on the same Wi-Fi network
+2. The ArchivesSpace development servers must be running on the development machine
+
+##### Steps
+
+1. Get your development machine's local IP address
+
+   On macOS:
+
+   ```bash
+   ipconfig getifaddr en0
+   ```
+
+   On Linux:
+
+   ```bash
+   hostname -I | awk '{print $1}'
+   ```
+
+   This returns something like `134.192.0.47`.
+
+2. Start the [development servers](#run-the-development-servers)
+
+   The development servers bind to `0.0.0.0` by default, making them accessible from other devices on the network (see the [frontend binding example](https://github.com/archivesspace/archivesspace/blob/f77dec627cd1feac77e4b67f9242d617efe80e94/build/build.xml#L899)).
+
+3. Access from another device
+
+   On the other device, open a web browser and navigate to your development machine's IP address with the appropriate port, e.g. `http://<your-local-ip>:<port>/`.
+
+   So for IP address `134.192.0.47`:
+   - Staff interface: `http://134.192.0.47:3000/`
+   - Public interface: `http://134.192.0.47:3001/`
+   - API: `http://134.192.0.47:4567/`
+
+## Running the tests
+
+### Backend tests
+
+_By default, the tests are configured to run using a separate MySQL & Solr from the
+development servers. This means that the development and test environments will not
+interfere with each other._
+
+```bash
+# run the backend / api tests
+./build/run backend:test
+```
+
+You can also run a single spec file with:
+
+```bash
+./build/run backend:test -Dspec="myfile_spec.rb"
+```
+
+Or a single example with:
+
+```bash
+./build/run backend:test -Dexample="does something important"
+```
+
+Or by file line with:
+
+```bash
+./build/run backend:test -Dspec="myfile_spec.rb:123"
+```
+
+There are specific instructions and requirements for the [UI tests](/development/ui_test) to work.
+
+**Advanced: tests and the build directory**
+
+Running the tests may create directories in `./build/test`. These will be
+the same as those described above for the development servers.
+
+## Coverage reports
+
+You can run the coverage reports using:
+
+    ./build/run coverage
+
+This runs all of the above tests in coverage mode and, when the run
+finishes, produces a set of HTML reports within the `coverage`
+directory in your ArchivesSpace project directory.
+
+## Linting and formatting with Rubocop
+
+If you are editing or adding source files that you intend to contribute via a pull request,
+you should make sure your changes conform to the layout and style rules by running:
+
+    ./build/run rubocop
+
+Most errors can be auto-corrected by running:
+
+    ./build/run rubocop -Dcorrect=true
+
+## Submitting a Pull Request
+
+When you have code ready to be reviewed, open a pull request to ask for it to be
+merged into the codebase.
+
+To help make the review go smoothly, here are some general guidelines:
+
+- **Your pull request should address a single issue.**
+  It's better to split large or complicated PRs into discrete steps if possible. This
+  makes review more manageable and reduces the risk of conflicts with other changes.
+- **Give your pull request a brief title, referencing any JIRA or GitHub issues resolved
+  by the pull request.**
+  Including JIRA numbers (e.g. 'ANW-123') explicitly in your pull request title ensures the
+  PR will be linked to the original issue in JIRA. Similarly, referencing GitHub issue numbers
+  (e.g. 'Fixes #123') will automatically close that issue when the PR is merged.
+- **Fill out as much of the Pull Request template as is possible/relevant.**
+  This makes it easier to understand the full context of your PR, including any discussions or supporting documentation that went into developing the functionality or resolving the bug.
+
+## Building a distribution
+
+See: [Building an ArchivesSpace Release](/development/release) for information on building a distribution.
+
+## Generating API documentation
+
+See: [Building an ArchivesSpace Release](/development/release) for information on building the documentation.
diff --git a/src/content/docs/de/development/docker.md b/src/content/docs/de/development/docker.md
new file mode 100644
index 0000000..8168231
--- /dev/null
+++ b/src/content/docs/de/development/docker.md
@@ -0,0 +1,42 @@
+---
+title: Docker
+description: A guide to using the Docker configuration with ArchivesSpace.
+---
+
+The [Docker](https://www.docker.com/) configuration is used to create [automated builds](https://hub.docker.com/r/archivesspace/archivesspace/) on Docker Hub, which are deployed to [the latest version](http://test.archivesspace.org) when the build completes.
+
+## Custom builds
+
+Run ArchivesSpace with MySQL, an external Solr, and a web proxy. Switch to the
+branch you want to build:
+
+```bash
+# if you already have running containers and want to clear them out
+docker-compose stop
+docker-compose rm
+
+# build the local image
+docker-compose build # needed whenever the branch is changed and ready to test
+docker-compose up
+
+# running specific containers
+docker-compose up -d db solr # in background
+docker-compose up app web # in foreground
+
+# to access a running container
+docker exec -it archivesspace_app_1 bash
+```
+
+## Sharing an image
+
+The easiest way to share the built image is to create an account on [Docker Hub](https://hub.docker.com/). Next, retag the image and push it to the hub account:
+
+```bash
+DOCKER_ID_USER=example
+TAG=awesome-updates
+docker tag archivesspace_app:latest $DOCKER_ID_USER/archivesspace:$TAG
+docker push $DOCKER_ID_USER/archivesspace:$TAG
+```
+
+To download the image: `docker pull example/archivesspace:awesome-updates`.
+
+---
diff --git a/src/content/docs/de/development/e2e_tests.md b/src/content/docs/de/development/e2e_tests.md
new file mode 100644
index 0000000..2a78b10
--- /dev/null
+++ b/src/content/docs/de/development/e2e_tests.md
@@ -0,0 +1,152 @@
+---
+title: ArchivesSpace End-to-End Test Suite
+description: Instructions on running the end-to-end test suite.
+---
+
+For more context on the [End-to-End test suite](https://github.com/archivesspace/archivesspace/tree/master/e2e-tests) and how to contribute tests, see our [wiki page](https://archivesspace.atlassian.net/wiki/spaces/ADC/pages/4606590990/How+to+contribute+End+to+End+test+scenarios).
+
+## Recommended setup
+
+### Using a version manager
+
+The required Ruby version for the e2e test application is documented in [`./.ruby-version`](./.ruby-version).
+
+It is strongly recommended to use a version manager (such as [mise](https://mise.jdx.dev/)) to be able to switch to any version that a given project requires.
+
+#### mise
+
+We recommend using [mise](https://mise.jdx.dev/) to manage Ruby (and other runtimes). Installation instructions are available at [Getting started](https://mise.jdx.dev/getting-started.html).
+
+#### Alternatives to `mise`
+
+If you wish to use a different Ruby manager or installation method, see [Ruby's installation documentation](https://www.ruby-lang.org/en/documentation/installation/).
+
+### Installation
+
+From the ArchivesSpace root directory, navigate to the e2e test application, then install Ruby, Bundler, and the application dependencies:
+
+```sh
+# 1. Navigate to e2e-tests directory
+cd e2e-tests
+
+# 2. Install Ruby at the version specified in ./.tool-versions
+mise install
+
+# 3. Install the Bundler dependency manager
+gem install bundler
+
+# 4. 
Install project dependencies
+bundle install
+```
+
+## Running the tests locally
+
+### Just working on the e2e tests with Docker
+
+If you are just working on e2e tests and not touching the ArchivesSpace application, you can run e2e tests locally against the latest ArchivesSpace `master` branch build using Docker.
+
+#### Install Docker Desktop
+
+[Docker Desktop](https://www.docker.com/get-started/) is a one-click-install application for Linux, Mac, and Windows. It provides both terminal and GUI access to Docker. Download and install the appropriate version for your operating system from the link above. You can also use alternative software for running Docker containers, such as [OrbStack](https://orbstack.dev/) for macOS.
+
+#### Run the latest ArchivesSpace Docker image
+
+```sh
+# Get the latest ArchivesSpace `master` branch build
+docker compose pull
+
+# Start ArchivesSpace servers
+docker compose up
+```
+
+Verify the servers are running by opening [http://localhost:8080](http://localhost:8080) in a browser.
+
+### Working with an ArchivesSpace development environment
+
+You can run the e2e test suite against your local ArchivesSpace development environment. Be aware, however, that your database, Solr index, and any configuration changes will need to be reset.
+
+#### Reset your database and Solr index
+
+Make sure your ArchivesSpace instance has a [blank database](https://docs.archivesspace.org/development/dev/#loading-data-fixtures-into-dev-database) and [blank Solr index](https://docs.archivesspace.org/development/dev/#clear-out-existing-solr-state).
+
+#### Restore default configuration options (except for `AppConfig[:db_url]`)
+
+Make sure you revert any local changes to the default configuration options (in `../common/config/config.rb`) by commenting them out or deleting them, except for `AppConfig[:db_url]` (which is required for using the MySQL database).
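As a concrete illustration of trimming a config down to just the database URL, the following sketch is a dry run you can try anywhere: it is not an ArchivesSpace tool, the two `AppConfig` lines it seeds are illustrative, and in a real checkout you would point `CONFIG` at `../common/config/config.rb` instead of the temp-file default.

```shell
# Hypothetical dry run: reduce a config file to only its AppConfig[:db_url] line.
# CONFIG defaults to a temp file seeded with demo contents so the sketch runs
# anywhere; the AppConfig values below are illustrative.
CONFIG="${CONFIG:-$(mktemp)}"
[ -s "$CONFIG" ] || cat > "$CONFIG" <<'EOF'
AppConfig[:db_url] = "jdbc:mysql://127.0.0.1:3306/archivesspace?user=as&password=as123&useUnicode=true&characterEncoding=UTF-8"
AppConfig[:session_expire_after_seconds] = 3600
EOF

cp "$CONFIG" "$CONFIG.bak"                  # backup, so you can restore afterwards
grep 'AppConfig\[:db_url\]' "$CONFIG.bak" > "$CONFIG"
cat "$CONFIG"                               # only the db_url line remains
```

Keeping the `.bak` copy makes it easy to restore your customizations once the e2e run is finished.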
+ +#### Run the frontend dev server + +Start the `frontend:devserver` as described [here](https://docs.archivesspace.org/development/dev/#run-the-development-servers). Verify it is running by opening [http://localhost:3000/](http://localhost:3000/) in your browser. + +#### Run the public dev server + +Start the `public:devserver` as described [here](https://docs.archivesspace.org/development/dev/#run-the-development-servers). Verify it is running by opening [http://localhost:3001/](http://localhost:3001/) in your browser. + +#### Set the `STAFF_URL` environment variable + +Set your `STAFF_URL` environment variable to point the e2e tests at the local development server: + +```sh +export STAFF_URL='http://localhost:3000' +``` + +#### Set the `PUBLIC_URL` environment variable + +Set your `PUBLIC_URL` environment variable to point the e2e tests at the local public interface: + +```sh +export PUBLIC_URL='http://localhost:3001' +``` + +## Running tests + +After setting the appropriate `STAFF_URL` and `PUBLIC_URL` environment variables as described above, run the desired test(s) according to the following commands. + +### All test files at once + +```sh +bundle exec cucumber staff_features/ +``` + +### All scenarios in a specific file + +```sh +bundle exec cucumber staff_features/assessments/assessment_create.feature +``` + +### A specific scenario in a specific file + +```sh +bundle exec cucumber staff_features/assessments/assessment_create.feature --name 'Assessment is created' +``` + +## Debugging + +Add a `byebug` statement in any `.rb` file to set a breakpoint and start a debugging session in the console while running. See more [here](https://github.com/deivid-rodriguez/byebug). Don't forget to remove any `byebug` statements before a `git push`... 
+ +If you need to see the browser while running the test scenario and debugging, add a `HEADLESS=''` argument, as in: + +```sh +bundle exec cucumber HEADLESS='' staff_features/ +``` + +## Linters + +This test suite uses two linters, [`cuke_linter`](https://github.com/enkessler/cuke_linter) and [`rubocop`](https://rubocop.org/), to maintain code quality. + +```sh +# Lints Cucumber .feature files +bundle exec cuke_linter + +# Lints Ruby .rb files +bundle exec rubocop +``` + +## Editor integration (optional) + +ArchivesSpace provides optional VS Code workspace tasks that can run the end-to-end test suite without manually setting environment variables or changing directories. + +These tasks execute the same cucumber commands described above and are simply a convenience wrapper around the documented command-line workflow. + +Setup instructions are documented in the **VS Code guide** [here](https://docs.archivesspace.org/development/vscode/). + +Contributors not using VS Code can ignore this section and run the tests directly from the command line. diff --git a/src/content/docs/de/development/ead-exporter.md b/src/content/docs/de/development/ead-exporter.md new file mode 100644 index 0000000..55cc9cb --- /dev/null +++ b/src/content/docs/de/development/ead-exporter.md @@ -0,0 +1,31 @@ +--- +title: Repository EAD Exporter +description: A guide to export all published resources' EAD within a specified repository into a single zip archive. +--- + +Exports all published resource record EAD XML files associated with a single +repository into a zip archive. This zip file will be saved in the ArchivesSpace +data directory (as defined in `config.rb`) and include the repository id in the +filename. 
+
+## Usage
+
+```sh
+./scripts/ead_export.sh user password repository_id
+```
+
+A best practice would be to put the password in a hidden file such as:
+
+```sh
+touch ~/.aspace_password
+chmod 0600 ~/.aspace_password
+vi ~/.aspace_password # enter your password
+```
+
+Then call the script like:
+
+```sh
+./scripts/ead_export.sh user $(cat /home/user/.aspace_password) repository_id
+```
+
+This way you avoid directly exposing the password on the command line or in a crontab, etc.
diff --git a/src/content/docs/de/development/index.md b/src/content/docs/de/development/index.md
new file mode 100644
index 0000000..e0fdd9d
--- /dev/null
+++ b/src/content/docs/de/development/index.md
@@ -0,0 +1,13 @@
+---
+title: Development
+description: The index to the development section of the ArchivesSpace technical documentation.
+---
+
+- [Running a development version of ArchivesSpace](./dev.html)
+- [Building an ArchivesSpace release](./release.html)
+- [Docker](./docker.html)
+- [DB versions listed by release](./release_schema_versions.html)
+- [User Interface Test Suite](./ui_test.html)
+- [Upgrading Rack for ArchivesSpace](./jruby-rack-build.html)
+- [ArchivesSpace Releases](./releases.html)
+- [Using the VS Code editor for local development](./vscode.html)
diff --git a/src/content/docs/de/development/jruby-rack-build.md b/src/content/docs/de/development/jruby-rack-build.md
new file mode 100644
index 0000000..9db3b5e
--- /dev/null
+++ b/src/content/docs/de/development/jruby-rack-build.md
@@ -0,0 +1,96 @@
+---
+title: Upgrading Rack
+description: A guide to upgrading Rack.
+---
+
+- Install a local JRuby (matching the ArchivesSpace version, currently 9.2.12.0) and switch to it.
+- Install Maven.
+- Download jruby-rack.
+ +```shell +git checkout 1.1-stable +# install bundler version to match 1.1-stable Gemfile.lock +gem install bundler --version=1.14.6 +``` + +Should result in: + +``` +Fetching bundler-1.14.6.gem +Successfully installed bundler-1.14.6 +Parsing documentation for bundler-1.14.6 +Installing ri documentation for bundler-1.14.6 +Done installing documentation for bundler after 5 seconds +1 gem installed +``` + +Set environment to target rack version (the version being upgraded to): + +```shell +export RACK_VERSION=2.2.3 +bundle +``` + +Should result in: + +``` +Fetching gem metadata from https://rubygems.org/............. +Fetching version metadata from https://rubygems.org/.. +Resolving dependencies... +Installing rake 10.4.2 +Using bundler 1.14.6 +Using diff-lcs 1.2.5 +Installing jruby-openssl 0.9.21 (java) +Using rack 2.2.3 (was 1.6.8) +Using rspec-core 2.14.8 +Using rspec-mocks 2.14.6 +Using appraisal 0.5.2 +Using rspec-expectations 2.14.5 +Using rspec 2.14.1 +Bundle complete! 5 Gemfile dependencies, 10 gems now installed. +Use `bundle show [gemname]` to see where a bundled gem is installed. +``` + +This will have bumped the Rack version in Gemfile.lock: + +```diff +diff --git a/Gemfile.lock b/Gemfile.lock +index 493c667..f016925 100644 +--- a/Gemfile.lock ++++ b/Gemfile.lock +@@ -6,7 +6,7 @@ GEM + rake + diff-lcs (1.2.5) + jruby-openssl (0.9.21-java) +- rack (1.6.8) ++ rack (2.2.3) + rake (10.4.2) + rspec (2.14.1) + rspec-core (~> 2.14.0) +@@ -23,7 +23,7 @@ PLATFORMS + DEPENDENCIES + appraisal + jruby-openssl (~> 0.9.20) +- rack (~> 1.6.8) ++ rack (= 2.2.3) + rake (~> 10.4.2) + rspec (~> 2.14.1) +``` + +Build the jruby-rack jar: + +```bash +bundle exec jruby -S rake clean gem SKIP_SPECS=true +``` + +This creates `target/jruby-rack-1.1.21.jar` with Rack 2.2.3. 
+ +Upload the jar to the public s3 bucket, specifying the rack version: + +```bash +aws s3 cp target/jruby-rack-1.1.21.jar \ + s3://as-public-shared-files/jruby-rack-1.1.21_rack-2.2.3.jar \ + --profile archivesspace +``` + +Finally, update `rack_version` in the aspace `build.xml` file. diff --git a/src/content/docs/de/development/release.md b/src/content/docs/de/development/release.md new file mode 100644 index 0000000..b157437 --- /dev/null +++ b/src/content/docs/de/development/release.md @@ -0,0 +1,263 @@ +--- +title: Building a release +description: How to build an ArchivesSpace release. +--- + +- [Pre-release steps](#pre-release-steps) +- [Build the docs](#build-and-publish-the-api-and-yard-docs) +- [Build the release](#building-a-release-yourself) +- [Post the release with release notes](#create-the-release-with-notes) +- [Post-release updates](#post-release-updates) + +## Clone the git repository + +When building a release it is important to start from a clean repository. The +safest way of ensuring this is to clone the repo: + +```shell +git clone https://github.com/archivesspace/archivesspace.git +``` + +## Checkout the release branch and create release tag + +If you are building a major or minor version (see [https://semver.org](https://semver.org)), +start by creating a branch for the release and all future patch releases: + +```shell +git checkout -b release-v1.0.x +git tag v1.0.0 +``` + +If you are building a patch version, just check out the existing branch and see below: + +```shell +git checkout release-v1.0.x +``` + +Patch versions typically arise because a regression or critical bug has arisen since +the last major or minor release. We try to ensure that the "hotfix" is merged into both +master and the release branch without the need to cherry-pick commits from one branch to +the other. The reason is that cherry-picking creates a new commit (with a new commit id) +that contains identical changes, which is not optimal for the repository history. 
+
+It is therefore preferable to start from the release branch when creating a "hotfix"
+that needs to be merged into both the release branch and master. The Pull Request should
+then be based on the release branch. After that Pull Request has been through code review
+and QA and been merged, a second Pull Request should be created to merge the updated release branch
+into master.
+
+Consider the following scenario. The current production release is v1.0.0 and a critical
+bug has been discovered. In the time since v1.0.0 was released, new features have been
+added to the master branch, intended for release in v1.1.0:
+
+```shell
+git checkout -b oh-no-some-migration-corrupts-some-data origin/release-v1.0.x
+( fix the problem )
+git commit -m "fix bad migration and add a migration to repair corrupted data"
+gh pr create -B release-v1.0.x --web
+( PR is reviewed and merged to the release branch )
+git checkout release-v1.0.x
+git pull
+git tag v1.0.1
+gh pr create -B master --web
+( PR is reviewed and merged to the master branch )
+```
+
+## Pre-release steps
+
+### Run the ArchivesSpace rake tasks to check for issues
+
+Before proceeding further, it’s a good idea to check that there aren’t missing
+translations or multiple gem versions.
+
+1. Bootstrap your current development environment on the latest master branch
+   by downloading all dependencies--JRuby, Gems, Solr, etc.
+
+   ```shell
+   build/run bootstrap
+   ```
+
+2. Run the following check (recommended):
+
+   ```shell
+   build/run rake -Dtask=check:multiple_gem_versions
+   ```
+
+3. If multiple gem versions are reported, address them before moving on.
+
+## Build and publish the API and Yard Docs
+
+API docs are built using the submodule in `docs/slate` and Docker.
+YARD docs are built using the YARD gem. At this time, they cover a small
+percentage of the code and are not especially useful.
+
+### Build the API docs
+
+1. 
API documentation depends on the [archivesspace/slate](https://github.com/archivesspace/slate) submodule + and on Docker. Slate cannot run on JRuby. + + ```shell + git submodule init + git submodule update + ``` + +2. Run the `doc:api` task to generate Slate API and Yard documentation. (Note: the + API generation requires a DB connection with standard enumeration values.) + + ```shell + ARCHIVESSPACE_VERSION=X.Y.Z APPCONFIG_DB_URL=$APPCONFIG_DB_URL build/run doc:api + ``` + + This generates `docs/slate/source/index.html.md` (Slate source document). + +3. (Optional) Run a docker container to preview API docs. + + ```shell + docker-compose -f docker-compose-docs.yml up + ``` + + Visit `http://localhost:4568` to preview the api docs. + +4. Build the static api files. The api markdown document should already be in `docs/slate/source` (step 2 above). + The api markdown will be rendered to html and moved to `docs/build/api`. + ```shell + docker run --rm --name slate -v $(pwd)/docs/build/api:/srv/slate/build -v $(pwd)/docs/slate/source:/srv/slate/source slatedocs/slate build + ``` + +### Build the YARD docs + +1. Build the YARD docs in the `docs/build/doc` directory: + + ```shell + ./build/run doc:yardoc + ``` + +### Commit built docs and push to Github pages + +1. Double check that you are on a release branch (we don't need this stuff in master). 
Commit the newly built documentation and push it to the `gh-pages` branch only:
+
+   ```shell
+   git add docs/build
+   git commit -m "release-vx.y.z api and yard documentation"
+   ```
+
+   Use `git subtree` to push the documentation to the `gh-pages` branch:
+
+   ```shell
+   git subtree push --prefix docs/build origin gh-pages
+   ```
+
+   Published documents should appear a short while later at:
+   [http://archivesspace.github.io/archivesspace/api](http://archivesspace.github.io/archivesspace/api)
+   [http://archivesspace.github.io/archivesspace/doc](http://archivesspace.github.io/archivesspace/doc)
+
+   Note: if the push command fails you may need to delete `gh-pages` in the remote repo:
+
+   ```shell
+   git push origin :gh-pages
+   ```
+
+   **Note:** do not push the docs/build directory to the release branch, as it is only meant to be maintained in the `gh-pages` branch.
+
+## Building a release yourself
+
+1. Building the actual release is very simple. Run the following:
+
+   ```shell
+   ./scripts/build_release vX.X.X
+   ```
+
+   Replace X.X.X with the version number. This will build and package a release
+   in a zip file.
+
+## Building a release on Github
+
+1. There is no need to build the release yourself. Just push your tag to Github
+   and trigger the `release` workflow:
+   ```shell
+   git push origin vX.X.X
+   ```
+   Replace X.X.X with the version number. The release will be created as a **draft**; it will not be published automatically.
+
+## Create the Release with Notes
+
+### Build the release notes
+
+**As of v3.4.0, it should no longer be necessary to build release notes manually.**
+
+To manually generate release notes:
+
+Create a deployment token on your [github developer settings](https://github.com/settings/tokens).
+
+```shell
+export GITHUB_TOKEN={YOUR DEPLOYMENT TOKEN ON GITHUB}
+./build/run doc:release_notes -Dcurrent_tag=v3.4.0 -Doutfile=RELEASE_NOTES.md -Dtoken=$GITHUB_TOKEN
+```
+
+#### Edit Release Page As Necessary
+
+If there are any special considerations, add them to the release page manually. Special considerations
+might include changes that require 3rd party plugins to be updated or
+that a full reindex is required.
+
+Example content:
+
+```md
+This release requires a **full reindex** of ArchivesSpace for all functionality to work
+correctly. Please follow the [instructions for reindexing](/administration/indexes)
+before starting ArchivesSpace with the new version.
+```
+
+## Post-release updates
+
+After a release has gone out, it's time for some maintenance before the next
+cycle of development clicks into full gear. Consider the following, depending on
+current team consensus:
+
+### Branches
+
+Delete merged and stale branches in Github as appropriate.
+
+### Milestones
+
+Close the just-released Milestone, setting its due date to today's date. Create a
+new Milestone for the anticipated next release (this can be changed later if the
+version numbering is changed for some reason).
+
+### Test Servers
+
+Review existing test servers, and request the removal of any that are no longer
+needed (e.g. feature branches that have been merged for the release).
+
+### GitHub Issues
+
+Review existing open GH issues and close any that have been resolved by
+the new release (linking to a specific PR if possible). For the remaining open
+issues, review whether they are still a problem, apply labels, link to known JIRA
+issues, and add comments as necessary/relevant.
+
+### Accessibility Scan
+
+Run accessibility scans for both the public and staff sites and file a ticket
+for any new or ongoing accessibility errors.
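As a sketch, the scans could be scripted with a command-line checker such as pa11y; the tool choice here is an assumption, not a project standard:

```shell
# wrap the scanner so both sites can be checked the same way
scan() {
  npx pa11y "$1"   # pa11y exits non-zero when accessibility errors are found
}

# usage (uncomment and point at your running staff/public sites --
# the ports below are hypothetical):
# scan "http://localhost:8080/"   # staff interface
# scan "http://localhost:8081/"   # public interface
```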
+
+### PR Assignments
+
+Begin assigning queued PRs to members of the Core Committers group, making
+sure to include the appropriate milestone for the anticipated next release.
+
+### Dependencies
+
+#### Gems
+
+Take a look at all the Gemfile.lock files (in backend, frontend, public,
+etc.) and review the gem versions. Pay close attention to the Rails & Friends
+(ActiveSupport, ActionPack, etc.), Rack, and Sinatra versions and check whether
+there have been any security patch releases. There usually are, especially
+since Rails ships fix updates rather frequently.
+
+To update the gems, update the version in the Gemfile, delete Gemfile.lock, and
+run `./build/run bootstrap` to download everything. Then make sure your test
+suite passes.
+
+Once everything passes, commit your Gemfiles and Gemfile.lock files.
diff --git a/src/content/docs/de/development/release_schema_versions.md b/src/content/docs/de/development/release_schema_versions.md
new file mode 100644
index 0000000..42a75d1
--- /dev/null
+++ b/src/content/docs/de/development/release_schema_versions.md
@@ -0,0 +1,41 @@
+---
+title: Database versions by release
+description: A list of ArchivesSpace releases and their corresponding database versions.
+---
+
+| Release | DB Version |
+| ------- | ---------- |
+| 1.1.0 | 33 |
+| 1.1.1 | 35 |
+| 1.1.2 | 35 |
+| 1.2.0 | 38 |
+| 1.3.0 | 56 |
+| 1.4.0 | 59 |
+| 1.4.1 | 59 |
+| 1.4.2 | 59 |
+| 1.5.0 | 74 |
+| 1.5.1 | 74 |
+| 1.5.2 | 75 |
+| 1.5.3 | 75 |
+| 1.5.4 | 75 |
+| 2.0.0 | 84 |
+| 2.0.1 | 84 |
+| 2.1.0 | 92 |
+| 2.1.1 | 92 |
+| 2.1.2 | 92 |
+| 2.2.0 | 93 |
+| 2.2.1 | 94 |
+| 2.2.2 | 95 |
+| 2.3.0 | 97 |
+| 2.3.1 | 97 |
+| 2.3.2 | 97 |
+| 2.4.0 | 100 |
+| 2.4.1 | 100 |
+| 2.5.0 | 102 |
+| 2.5.1 | 102 |
+| 2.5.2 | 108 |
+| 2.6.0 | 120 |
+| 2.7.0 | 126 |
+| 2.7.1 | 129 |
+| 2.8.0 | 135 |
+| 2.8.1 | 138 |
diff --git a/src/content/docs/de/development/releases.md b/src/content/docs/de/development/releases.md
new file mode 100644
index 0000000..2b31a65
--- /dev/null
+++ b/src/content/docs/de/development/releases.md
@@ -0,0 +1,192 @@
+---
+title: Releases
+description: A list of ArchivesSpace releases, their release dates, schema numbers, and links to the release on GitHub.
+---
+
+3.4.0 May 24, 2023
+The schema number for this release is 172.
+https://github.com/archivesspace/archivesspace/tree/v3.4.0
+
+3.3.1 Oct 4, 2022
+The schema number for this release is 164.
+https://github.com/archivesspace/archivesspace/tree/v3.3.1
+
+3.2.0 February 4, 2022
+The schema number for this release is 159.
+https://github.com/archivesspace/archivesspace/releases/download/v3.2.0/archivesspace-v3.2.0.zip
+
+3.1.1 November 19, 2021
+The schema number for this release is 157.
+https://github.com/archivesspace/archivesspace/releases/download/v3.1.1/archivesspace-v3.1.1.zip
+
+3.1.0 September 20, 2021
+The schema number for this release is 157.
+https://github.com/archivesspace/archivesspace/releases/download/v3.1.0/archivesspace-v3.1.0.zip
+
+3.0.2 August 11, 2021
+The schema number for this release is 148.
+https://github.com/archivesspace/archivesspace/releases/download/v3.0.2/archivesspace-v3.0.2.zip
+
+3.0.1 June 4, 2021
+The schema number for this release is 147.
+https://github.com/archivesspace/archivesspace/releases/download/v3.0.1/archivesspace-v3.0.1.zip + +3.0.0 May 10, 2021 +The schema number for this release is 147. +[Bug in Release] + +2.8.1 Nov 11, 2020. +The schema number for this release is 138. +https://github.com/archivesspace/archivesspace/releases/download/v2.8.1/archivesspace-v2.8.1.zip + +2.8.0 Jul 16, 2020. +The schema number for this release is 135. +https://github.com/archivesspace/archivesspace/releases/download/v2.8.0/archivesspace-v2.8.0.zip + +2.7.1 Feb 14, 2020. +The schema number for this release is 129. +https://github.com/archivesspace/archivesspace/releases/download/v2.7.1/archivesspace-v2.7.1.zip + +2.7.0 Oct 9, 2019. +The schema number for this release is 126. +https://github.com/archivesspace/archivesspace/releases/download/v2.7.0/archivesspace-v2.7.0.zip + +2.6.0 May 30, 2019. +The schema number for this release is 120. +https://github.com/archivesspace/archivesspace/releases/download/v2.6.0/archivesspace-v2.6.0.zip + +2.5.2 Jan 15, 2019. +The schema number for this release is 108. +https://github.com/archivesspace/archivesspace/releases/download/v2.5.2/archivesspace-v2.5.2.zip + +2.5.1 Oct 17, 2018. +This release includes no new database migrations. +https://github.com/archivesspace/archivesspace/releases/download/v2.5.1/archivesspace-v2.5.1.zip + +2.5.0 Aug 10, 2018. +The schema number for this release is 102. +https://github.com/archivesspace/archivesspace/releases/download/v2.5.0/archivesspace-v2.5.0.zip + +2.4.1 Jun 22, 2018. +This release includes no new database migrations. +https://github.com/archivesspace/archivesspace/releases/download/v2.4.1/archivesspace-v2.4.1.zip + +2.4.0 Jun 7, 2018. +The schema number for this release is 100. +https://github.com/archivesspace/archivesspace/releases/download/v2.4.0/archivesspace-v2.4.0.zip + +2.3.2 Mar 27, 2018. +This release includes no new database migrations. 
+https://github.com/archivesspace/archivesspace/releases/download/v2.3.2/archivesspace-v2.3.2.zip + +2.3.1 Feb 28, 2018. +This release includes no new database migrations. +https://github.com/archivesspace/archivesspace/releases/download/v2.3.1/archivesspace-v2.3.1.zip + +2.3.0 Feb 5, 2018. +The schema number for this release is 97. +https://github.com/archivesspace/archivesspace/releases/download/v2.3.0/archivesspace-v2.3.0.zip + +2.2.2 Dec 13, 2017. +The schema number for this release is 95. +https://github.com/archivesspace/archivesspace/releases/download/v2.2.2/archivesspace-v2.2.2.zip + +2.2.0 Oct 12, 2017. +The schema number for this release is 93. +https://github.com/archivesspace/archivesspace/releases/download/v2.2.0/archivesspace-v2.2.0.zip + +2.1.2 Sep 1, 2017. +The schema number for this release is 92. +https://github.com/archivesspace/archivesspace/releases/download/v2.1.2/archivesspace-v2.1.2.zip + +2.1.1 Aug 16, 2017. +The schema number for this release is 92. +https://github.com/archivesspace/archivesspace/releases/download/v2.1.1/archivesspace-v2.1.1.zip + +2.1.0 Jul 18, 2017. +The schema number for this release is 92. +https://github.com/archivesspace/archivesspace/releases/download/v2.1.0/archivesspace-v2.1.0.zip + +2.0.1 May 2, 2017. +The schema number for this release is 84. +https://github.com/archivesspace/archivesspace/releases/download/v2.0.1/archivesspace-v2.0.1.zip + +2.0.0 Apr 18, 2017. +The schema number for this release is 84. +https://github.com/archivesspace/archivesspace/releases/download/v2.0.0/archivesspace-v2.0.0.zip + +1.5.4 Mar 16, 2017. +The schema number for this release is 75. +https://github.com/archivesspace/archivesspace/releases/download/v1.5.4/archivesspace-v1.5.4.zip + +1.5.3 Feb 15, 2017. +The schema number for this release is 75. +https://github.com/archivesspace/archivesspace/releases/download/v1.5.3/archivesspace-v1.5.3.zip + +1.5.2 Dec 8, 2016. +The schema number for this release is 75. 
+https://github.com/archivesspace/archivesspace/releases/download/v1.5.2/archivesspace-v1.5.2.zip + +1.5.1 Jul 29, 2016. +The schema number for this release is 74. +https://github.com/archivesspace/archivesspace/releases/download/v1.5.1/archivesspace-v1.5.1.zip + +1.5.0 Jul 20, 2016. +The schema number for this release is 74. +https://github.com/archivesspace/archivesspace/releases/download/v1.5.0/archivesspace-v1.5.0.zip + +1.4.2 Oct 27, 2015. +The schema number for this release is 59. +https://github.com/archivesspace/archivesspace/releases/download/v1.4.2/archivesspace-v1.4.2.zip + +1.4.1 Oct 13, 2015. +The schema number for this release is 59. +https://github.com/archivesspace/archivesspace/releases/download/v1.4.1/archivesspace-v1.4.1.zip + +1.4.0 Sep 29, 2015. +The schema number for this release is 59. +https://github.com/archivesspace/archivesspace/releases/download/v1.4.0/archivesspace-v1.4.0.zip + +1.3.0 Jun 30, 2015. +The schema number for this release is 56. +https://github.com/archivesspace/archivesspace/releases/download/v1.3.0/archivesspace-v1.3.0.zip + +1.2.0 Mar 30, 2015. +The schema number for this release is 38. +https://github.com/archivesspace/archivesspace/releases/download/v1.2.0/archivesspace-v1.2.0.zip + +1.1.2 Jan 21, 2015. +The schema number for this release is 35. +https://github.com/archivesspace/archivesspace/releases/download/v1.1.2/archivesspace-v1.1.2.zip + +1.1.1 Jan 6, 2015. +The schema number for this release is 35. +https://github.com/archivesspace/archivesspace/archive/refs/tags/v1.1.1.zip (only source available) + +1.1.0 Oct 20, 2014. +The schema number for this release is 33. +https://github.com/archivesspace/archivesspace/releases/download/v1.1.0/archivesspace-v1.1.0.zip + +1.0.9 May 13, 2014. +The schema number for this release is ??? +https://github.com/archivesspace/archivesspace/releases/download/v1.0.9/archivesspace-v1.0.9.zip + +1.0.7.1 March 7, 2014. +The schema number for this release is ??? 
+https://github.com/archivesspace/archivesspace/releases/download/v1.0.7.1/archivesspace-v1.0.7.1.zip + +1.0.4 Jan 14, 2014. +The schema number for this release is ??? +https://github.com/archivesspace/archivesspace/releases/download/v1.0.4/archivesspace-v1.0.4.zip + +1.0.2 Nov 26, 2013. +The schema number for this release is ??? +https://github.com/archivesspace/archivesspace/releases/download/v1.0.2/archivesspace-v1.0.2.zip + +1.0.1 Nov 1, 2013. +The schema number for this release is ??? +https://github.com/archivesspace/archivesspace/releases/download/v1.0.1/archivesspace-v1.0.1.zip + +1.0.0 Oct 4, 2013. +The schema number for this release is ??? +https://github.com/archivesspace/archivesspace/releases/download/v1.0.0/archivesspace-v1.0.0.zip diff --git a/src/content/docs/de/development/ui_test.md b/src/content/docs/de/development/ui_test.md new file mode 100644 index 0000000..c64d6a6 --- /dev/null +++ b/src/content/docs/de/development/ui_test.md @@ -0,0 +1,140 @@ +--- +title: UI tests +description: Instructions on running automated browser tests with Selenium on the ArchivesSpace UI on both Firefox and Chrome. +--- + +ArchivesSpace's staff and public interfaces use [Selenium](http://docs.seleniumhq.org/) to run automated browser tests. These tests can be run using [Firefox via geckodriver](https://firefox-source-docs.mozilla.org/testing/geckodriver/geckodriver/index.html) and [Chrome](https://sites.google.com/a/chromium.org/chromedriver/home) (either regular Chrome or headless). + +## UI tests with firefox (default) + +Firefox is the default used in our [CI workflows](https://github.com/archivesspace/archivesspace/actions). + +On Ubuntu Linux 22.04 or later, the included Firefox deb package is a transition package that actually installs Firefox through [snap](https://snapcraft.io/). Snap has security restrictions that do not work with automated testing without additional configuration. 
+
+To uninstall the Firefox snap package and reinstall it as a traditional deb package on Ubuntu Linux, use:
+
+```bash
+# remove old snap firefox package (if installed)
+sudo snap remove firefox
+
+# create a keyring directory (if not existing)
+sudo install -d -m 0755 /etc/apt/keyrings
+
+# download mozilla key and add it to the keyring
+wget -q https://packages.mozilla.org/apt/repo-signing-key.gpg -O- | sudo tee /etc/apt/keyrings/packages.mozilla.org.asc > /dev/null
+
+# set high priority for the mozilla packages
+echo "deb [signed-by=/etc/apt/keyrings/packages.mozilla.org.asc] https://packages.mozilla.org/apt mozilla main" | sudo tee -a /etc/apt/sources.list.d/mozilla.list > /dev/null
+echo '
+Package: *
+Pin: origin packages.mozilla.org
+Pin-Priority: 1000
+' | sudo tee /etc/apt/preferences.d/mozilla
+
+# install firefox
+sudo apt update && sudo apt install firefox
+```
+
+When using Firefox, you need to make sure that the version of geckodriver you are using works with your Firefox version; see this [compatibility table](https://firefox-source-docs.mozilla.org/testing/geckodriver/Support.html). Get your installed Firefox version by running `firefox --version`.
+
+On Linux, you can download the geckodriver version that corresponds to your Firefox version [here](https://github.com/mozilla/geckodriver/releases).
+
+On macOS you can use `brew install geckodriver`.
+
+## UI tests with Chrome
+
+To run using Chrome, you must first download the appropriate [ChromeDriver
+executable](https://sites.google.com/a/chromium.org/chromedriver/downloads)
+and place it somewhere in your OS system path. Mac users with Homebrew may accomplish this via `brew install --cask chromedriver`.
+
+**Please note, you must have either Firefox or Chrome installed on your system to
+run these tests.
Consult the [Firefox WebDriver](https://developer.mozilla.org/en-US/docs/Web/WebDriver)
+or [ChromeDriver](https://sites.google.com/a/chromium.org/chromedriver/home)
+documentation to ensure your Selenium, driver, browser, and OS versions all match
+and support each other.**
+
+## Before running:
+
+Run the bootstrap build task to configure JRuby and all required dependencies:
+
+```bash
+$ cd ..
+$ build/run bootstrap
+```
+
+Note: all example code assumes you are running from your ArchivesSpace project directory.
+
+## Running the tests:
+
+```bash
+# Frontend tests
+./build/run frontend:selenium # Firefox, headless
+FIREFOX_OPTS= ./build/run frontend:selenium # Firefox, no-opts = heady
+
+SELENIUM_CHROME=true ./build/run frontend:selenium # Chrome, headless
+SELENIUM_CHROME=true CHROME_OPTS= ./build/run frontend:selenium # Chrome, no-opts = heady
+
+# Public tests
+./build/run public:test # Firefox, headless
+FIREFOX_OPTS= ./build/run public:test # Firefox, no-opts = heady
+
+SELENIUM_CHROME=true ./build/run public:test # Chrome, headless
+SELENIUM_CHROME=true CHROME_OPTS= ./build/run public:test # Chrome, no-opts = heady
+```
+
+Tests can be scoped to specific files or groups:
+
+```bash
+./build/run .. -Dspec='path/to/spec/from/spec/directory' # single file
+./build/run .. -Dexample='[description from it block]' # specific block
+
+# EXAMPLES
+./build/run frontend:selenium -Dexample='Repository model'
+FIREFOX_OPTS= ./build/run frontend:selenium -Dexample='Repository model' # Firefox, heady
+
+./build/run public:test -Dspec='features/accessibility_spec.rb'
+SELENIUM_CHROME=true CHROME_OPTS= ./build/run public:test -Dspec='features/accessibility_spec.rb' # Chrome, heady
+```
+
+Tests require a backend and a frontend service to be running.
To avoid the overhead of starting and stopping them while developing, you can run tests against a dev backend:
+
+```bash
+# start mysql and solr containers:
+docker-compose -f docker-compose-dev.yml up
+
+# start services:
+supervisord -c supervisord/archivesspace.conf
+
+# run a spec using the started backend:
+ASPACE_TEST_BACKEND_URL='http://localhost:4567' ./build/run frontend:test -Dpattern="./features/events_spec.rb"
+
+# run all examples that contain "can spawn" in their description:
+./build/run frontend:test -Dpattern="./features/accessions_spec.rb" -Dexample="can spawn"
+```
+
+Note, however, that some tests depend on a sequence of ordered steps and may not always run cleanly in isolation. In this case, more than the example provided may be run, and/or unexpected failures may result.
+
+### Saved pages on spec failures
+
+When frontend specs fail, a screenshot and an HTML page are saved for each failed example under `frontend/tmp/capybara`. On the CI, a zip file will be available for each failed CI job run under Summary -> Artifacts. In order to load the assets (and not see plain HTML) when viewing the saved HTML pages, a dev server should be running locally on port 3000; see [Running a development version of ArchivesSpace](/development/dev).
+
+### Keeping the test database up to date
+
+When calling `./build/run frontend:test` to run frontend specs, the following steps happen before the actual specs run:
+
+- All tables of the test database are dropped: `./build/run db:nuke:test`
+- `frontend/spec/fixtures/archivesspace-test.sql` is loaded to the test database: `./build/run db:load:test`
+- Any not-yet-applied migrations are run: `./build/run db:migrate:test`
+
+#### Updating the test database dump
+
+If any migrations are being applied whenever you run one or all frontend specs, it means that the test database dump `frontend/spec/fixtures/archivesspace-test.sql` is out of date.
A new test database dump can be created by running:
+
+```bash
+./build/run db:nuke:test
+./build/run db:load:test
+./build/run db:migrate:test
+./build/run db:dump:test
+```
+
+An updated `frontend/spec/fixtures/archivesspace-test.sql` will be created, which can be committed and pushed in a Pull Request.
diff --git a/src/content/docs/de/development/vscode.md b/src/content/docs/de/development/vscode.md
new file mode 100644
index 0000000..729f336
--- /dev/null
+++ b/src/content/docs/de/development/vscode.md
@@ -0,0 +1,70 @@
+---
+title: Using the VS Code editor
+description: Instructions for using the VS Code editor with ArchivesSpace, including prerequisites and setup.
+---
+
+ArchivesSpace provides a [VS Code settings file](https://github.com/archivesspace/archivesspace/blob/master/.vscode/settings.json) that makes it easy for contributors using VS Code to follow the code style of the project and work with the end-to-end tests. Using this toolchain in your editor helps fix code format and lint errors _before_ committing files or running tests. In many cases such errors will be fixed automatically when the file being worked on is saved. Errors that can't be fixed automatically will be highlighted with squiggly lines. Hovering your cursor over these lines will display a description of the error to help reach a solution.
+
+## Prerequisites
+
+1. [Node.js](https://nodejs.org)
+2. [Ruby](https://www.ruby-lang.org/)
+3. [VS Code](https://code.visualstudio.com/)
+
+## Set up VS Code
+
+### Add system dependencies
+
+1. [ESLint](https://eslint.org/)
+2. [Prettier](https://prettier.io/)
+3. [Rubocop](https://rubocop.org/)
+4. [Stylelint](https://stylelint.io/)
+
+#### Rubocop
+
+```bash
+gem install rubocop
+```
+
+See https://docs.rubocop.org/rubocop/installation.html for further information, including using Bundler.
+
+#### ESLint, Prettier, Stylelint
+
+Run the following command from the ArchivesSpace root directory.
+
+```bash
+npm install
+```
+
+See [package.json](https://github.com/archivesspace/archivesspace/blob/master/package.json) for further details on how these tools are used in ArchivesSpace.
+
+### Add VS Code extensions
+
+Add the following extensions via the VS Code command palette or the Extensions panel. (See this [documentation for installing and managing extensions](https://code.visualstudio.com/learn/get-started/extensions).)
+
+1. [ESLint](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) (dbaeumer.vscode-eslint)
+2. [Prettier](https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode) (esbenp.prettier-vscode)
+3. [Ruby Rubocop Revised](https://marketplace.visualstudio.com/items?itemName=LoranKloeze.ruby-rubocop-revived) (LoranKloeze.ruby-rubocop-revived)
+4. [Stylelint](https://marketplace.visualstudio.com/items?itemName=stylelint.vscode-stylelint) (stylelint.vscode-stylelint)
+
+Optional, for enhancing work with the end-to-end tests:
+
+5. [Cucumber](https://marketplace.visualstudio.com/items?itemName=CucumberOpen.cucumber-official) (CucumberOpen.cucumber-official); see [End-to-end test integration](#end-to-end-test-integration), especially step-definition navigation.
+
+Because these extensions work in tandem with the [VS Code settings file](https://github.com/archivesspace/archivesspace/blob/master/.vscode/settings.json), the settings only affect your ArchivesSpace VS Code workspace, not your global VS Code user settings.
+
+At this point the extensions should work out of the box, providing error messages and autocorrecting fixable errors on file save.
+
+### End-to-end test integration
+
+The ArchivesSpace repository includes optional VS Code workspace configuration that integrates the Cucumber end-to-end test suite with the editor.
The files [`.vscode/example.tasks.json`](https://github.com/archivesspace/archivesspace/blob/master/.vscode/example.tasks.json) and [`.vscode/example.settings.json`](https://github.com/archivesspace/archivesspace/blob/master/.vscode/example.settings.json) are not enabled by default, so they do not override your personal editor configuration. + +**Enable the tasks** + +Copy the example tasks file to `.vscode/tasks.json`. This adds a task that runs the e2e test suite with the correct working directory, Ruby environment, and environment variables. Run it via **Terminal → Run Task… → Cucumber: Run e2e-test** (the same command as in the [e2e test documentation](/development/e2e_tests)). You may optionally supply a feature file path, `file.feature:line`. + +**Step-definition navigation** + +Integrate the contents of `example.settings.json` into your existing `.vscode/settings.json` (do not replace the existing file, but merge the Cucumber-related settings if you desire to use them so your current workspace settings are preserved). + +This configures the Cucumber extension for `e2e-tests/**/*.feature` and shared Ruby step definitions, enabling jump-to-definition, undefined-step detection, and discovery of shared steps. This simplifies contributing new end-to-end tests. diff --git a/src/content/docs/de/index.mdx b/src/content/docs/de/index.mdx new file mode 100644 index 0000000..3d6ec85 --- /dev/null +++ b/src/content/docs/de/index.mdx @@ -0,0 +1,14 @@ +--- +title: ArchivesSpace Technical Documentation +description: Technical documentation for ArchivesSpace, the open source archives management tool. 
+tableOfContents: false +editUrl: false +issueUrl: false +lastUpdated: false +prev: false +next: false +--- + +import Homepage from '@components/HomePage.astro' + +<Homepage /> diff --git a/src/content/docs/de/migrations/migrate_from_archivists_toolkit.md b/src/content/docs/de/migrations/migrate_from_archivists_toolkit.md new file mode 100644 index 0000000..c45195b --- /dev/null +++ b/src/content/docs/de/migrations/migrate_from_archivists_toolkit.md @@ -0,0 +1,126 @@ +--- +title: Migrating from Archivists' Toolkit +description: Guidelines are for migrating data from Archivists' Toolkit 2.0 Update 16 to all ArchivesSpace 2.1.x or 2.2.x releases using the migration tool provided by ArchivesSpace. +--- + +These guidelines are for migrating data from Archivists' Toolkit 2.0 Update 16 to all ArchivesSpace 2.1.x or 2.2.x releases using the migration tool provided by ArchivesSpace. Migrations of data from earlier versions of the Archivists' Toolkit (AT) or other versions of ArchivesSpace are not supported by these guidelines or migration tool. + +> Note: A migration from Archivists' Toolkit to ArchivesSpace should not be run against an active production database. + +## Preparing for migration + +- Make a copy of the AT instance, including the database, to be migrated and use it as the source of the migration. It is strongly recommended that you not use your AT production instance and database as the source of the migration for the simple reason of protecting the production version from any anomalies that might occur during the migration process. +- Review your source database for the quality of the data. Look for invalid records, duplicate name and subject records, and duplicate controlled values. Irregular data will either be carried forward to the ArchivesSpace instance or, in some cases, block the migration process. +- Select a representative sample of accession, resource, and digital object records to be examined closely when the migration is completed. 
Make sure the sample represents both the simplest and the most complicated or extensive records in the overall data collection.
+
+### Notes
+
+- An AT subject record will be set to type 'topical' if it does not have a valid AT type statement or its type is not one of the types in ArchivesSpace. Several other AT LookupList values are not present in ArchivesSpace. These LookupList values cannot be added during the AT migration process and will therefore need to be changed in AT prior to migration. For full details on enum (controlled value list) mappings, see the data map. You can use the AT Lookup List tool to change values that will not map correctly, as specified by the data map.
+- Record audit information (created by, date created, modified by, and date modified) will not migrate from AT to ArchivesSpace. ArchivesSpace will assign new audit data to each record as it is imported. The exception to this is that the username of the user who creates an accession record will be migrated to the accession general note field.
+- Set up an ArchivesSpace production instance, including a MySQL database to migrate into. Instructions are included at [Getting Started with ArchivesSpace](/administration/getting_started) and [Running ArchivesSpace against MySQL](/provisioning/mysql).
+
+## Preparing for Migrating AT Data
+
+- The migration process is iterative in nature. A migration report is generated at the end of each migration routine. The report indicates errors or issues occurring with the migration. (An example of an AT migration report is provided at the end of this document.) You should use this report to determine if any problems observed in the migration results are best remedied in the source data or in the migrated data in the ArchivesSpace instance. If you address the problems in the source data, then you can simply conduct the migration again. 
+- However, once you accept the migration and address problems in the migrated data, you cannot migrate the source data again without establishing a new target ArchivesSpace instance. Migrating data to a previously migrated ArchivesSpace database may result in a great many duplicate record error messages and may cause unrecoverable damage to the ArchivesSpace database.
+- Please note, data migration can be a very memory- and time-intensive task due to the large number of records being transferred. As such, we recommend running the AT migration on a computer with at least 2GB of available memory.
+- Make sure your ArchivesSpace MySQL database is set up correctly, following the documentation in the ArchivesSpace README file. When creating a MySQL database, you MUST set the default character encoding for the database to be UTF8. This is particularly important if you use a MySQL client, such as Navicat, MySQL Workbench, phpMyAdmin, etc., to create the database. See [Running ArchivesSpace against MySQL](/provisioning/mysql) for more details.
+- Increase the maximum Java heap space if you are experiencing timeout events. To do so:
+ - Stop the current ArchivesSpace instance.
+ - Open the file "archivesspace.sh" (Linux / macOS) or "archivesspace.bat" (Windows) in a text editor. The file is located in the ArchivesSpace installation directory.
+ - Find the text string "-Xmx512m" and change it to "-Xmx1024m".
+ - Save the file.
+ - Restart the ArchivesSpace instance.
+ - Restart the AT migration process.
+
+## Running the Migration Tool as an AT Plugin
+
+- Make sure that the AT instance you want to migrate from is shut down. Next, download the "scriptAT.zip" file from the at-migration release GitHub page (https://github.com/archivesspace/at-migration/releases) and copy the file into the plugins folder of the AT instance, overwriting the one that's already there if needed.
+- Make sure the ArchivesSpace instance that you are migrating into is up and running. 
+- Restart the AT instance to load the newly installed plug-in. To run the plug-in, go to the "Tools" menu, then select "Script Runtime v1.0", and finally "ArchivesSpace Data Migrator". This will cause the plug-in window to display.
+
+![AT migrator](../../../../images/at_migrator.jpg)
+
+- Change the default information in the Migrator UI:
+ - **Threads** – Used to specify the number of clients that are used to copy Resource records simultaneously. The limit on the number of clients depends on the record size and allocated memory. A number from 4 to 6 is generally a good value to use, but can be reduced if an "Out of Memory Exception" occurs.
+ - **Host** – The URL and port number of the ArchivesSpace backend server.
+ - **"Copy records when done" checkbox** – Used to specify that the records should
+ be copied once the repository check has completed.
+ - **Password** – Password for the ArchivesSpace "admin" account. The default value
+ of "admin" should work unless it was changed by the ArchivesSpace
+ administrator.
+ - **Reset Password** – Each user account transferred has its password reset to this.
+ Please note that users need to change their password when they first log in
+ unless LDAP is used for authentication.
+ - **"Specify Type of Extent Data" Radio button** – If you are using the BYU Plugin,
+ select that option. Otherwise, leave as the default – Normal or Harvard Plugin.
+ - **Specify Unlinked Records to NOT Copy checkboxes** – If you have name or
+ subject records that are not linked to accessions, resources, or digital objects,
+ you can choose not to migrate those to ArchivesSpace.
+ - **"Records to Publish?" checkboxes** – Used to specify what types of records
+ should be published after they are migrated to ArchivesSpace.
+ - **Text box showing -refid_unique, -term_default** – This is needed for the
+ functioning of the migration tool. Please do not make changes to this area. 
+ - **Output Console** – Display section for following the migration while it is running.
+ - **View Error Log** – Used to view a printout of all the errors encountered during the
+ migration process. This can be used while the migration process is underway as well.
+- Once you have made the appropriate changes to the UI, there are three buttons to choose from to start the migration process.
+ - **Copy to ArchivesSpace** – This starts the migration to the ArchivesSpace instance
+ indicated by the Host URL.
+ - **Run Repository Check** – The repository check searches for, and attempts to fix, repository misalignment between Resources and linked Accession/Digital Object records. The fix applied entails copying the linked accession/digital object record to the repository of the resource record in the ArchivesSpace database (those record positions are not modified in the AT database).
+
+ As long as accession records are not linked to multiple Resource records in different repositories, the fix will be valid. Otherwise, you will receive a warning message. For such cases, the Resource and Accession record(s) will still be migrated, but without links to one another; those links will need to be re-established in ArchivesSpace.
+
+ This misalignment problem involves only accession and resource records and not digital object records, as accession and resource records have a many-to-many relationship. Assessments also can have a many-to-many relationship with resources, accessions, and digital objects. However, since assessments are small and quick to copy, they will simply be copied to as many repositories as needed to establish all the appropriate links.
+
+ If the "Copy Records When Done" checkbox is selected, the records will be migrated to the ArchivesSpace instance once the check is completed. 
+
+ - **Continue Previous Migration** – If the migration process fails, this is used to skip to the place where the previous failed migration left off. This should allow the migration process of resource records to be gracefully restarted without having to clean out the ArchivesSpace backend database and start from scratch.
+
+- For the most part, the data migration process should be automatic, with an error log being generated when completed. However, depending on the particular data, various errors may occur that would require the migration to be re-run after they have been resolved by the user. The time a migration takes to complete will depend on a number of factors (database size, network performance, etc.), but can be anywhere from a couple of hours to a few days.
+- Data from the following AT modules will migrate:
+ - Lookup Lists
+ - Repositories
+ - Locations
+ - Users
+ - Subjects
+ - Names
+ - Accessions
+ - Digital Object and Digital Object Components
+ - Resources and Resource Components
+ - Assessments
+- Data from the following AT modules will not migrate:
+ - Reports
+ > INFORMATION MISSING FROM SOURCE DOCUMENT - NEEDS REVIEW!!!
+
+## Assessing the Migration and Cleaning Up Data
+
+Use the migration report to assess the fidelity of the migration and to determine whether to:
+
+- Fix data in the source AT instance and conduct the migration again, or
+- Fix data in the target ArchivesSpace instance.
+
+If you choose to fix the data in AT and conduct the migration again, you will need to delete all the content in the ArchivesSpace instance.
+
+If you accept the migration in the ArchivesSpace instance, the following outlines how to check and fix your data.
+
+- Re-establish user passwords. While user records will migrate, the passwords associated with them will not. You will need to re-assign those passwords according to the policies or conventions of your repositories. 
+- Review closely the set of sample records you selected:
+ - Accessions
+ - Resources
+ - Digital objects
+- Review the following groups of records, making sure the correct number of records migrated:
+ - Accessions
+ - Assessments
+ - Resources
+ - Digital objects
+ - Controlled vocabulary lists
+ - Subjects
+ - Agents (Name records in AT)
+ - Locations
+ - Collection Management Classifications
+ - There may be a few extra agent records due to ArchivesSpace defaults or extra assessments if they were linked to records from more than one repository.
+- In conducting the reviews, look for duplicate or incomplete records, broken links, or truncated data.
+- Take special care to make sure your container data and locations are correct. The model for this is significantly different between AT and ArchivesSpace (where locations are tied to a container rather than directly to a resource or accession), so this presents some challenges for migration.
+- Merge enumeration values as necessary. For instance, if you had both 'local' and 'local sources' as a source for names, it might be a good idea to merge these values. diff --git a/src/content/docs/de/migrations/migrate_from_archon.md b/src/content/docs/de/migrations/migrate_from_archon.md new file mode 100644 index 0000000..f0402fb --- /dev/null +++ b/src/content/docs/de/migrations/migrate_from_archon.md @@ -0,0 +1,180 @@
+---
+title: Migrating from Archon
+description: Guidelines for migrating data from Archon 3.21-rev3 to ArchivesSpace 2.2.2 using the migration tool provided by ArchivesSpace.
+---
+
+These guidelines are for migrating data from Archon 3.21-rev3 to ArchivesSpace 2.2.2 using the migration tool provided by ArchivesSpace. Migrations of data from earlier versions of Archon or other versions of ArchivesSpace are not supported by these guidelines or the migration tool.
+
+> Note: A migration from Archon to ArchivesSpace should not be run against an active production database. 
+
+## Preparing for migration
+
+Select a representative sample of accession, classification, collection, collection content, and digital object records to be examined closely when the migration is completed. Make sure to include both simple and more complicated or extensive records in the sample.
+
+Review your Archon database for data quality.
+
+### Accession Records
+
+- Supply an accession date for all records, when possible. If an accession date is not
+ recorded in Archon, the date of 01/01/9999 will be supplied during the migration process. If you wish to change this default value, you may do so by editing the following file in the new Archon distribution, prior to running the migration:
+ `packages/core/templates/default/accession-list.inc.php`
+- Supply an identifier for all records, when possible. If an identifier is not recorded in Archon, a supplied identifier will be constructed during the migration process, consisting of the date and the truncated accession title.
+
+### Classification Records
+
+Ensure that there are no duplicate classification titles at the same level in the classification hierarchy. If the migration tool encounters a duplicate value, some of the save operations for classifications will fail, and you will need to redo the migration.
+
+### Collection Records
+
+If normalized dates are not recorded correctly (i.e. if the end date and begin date are reversed), they will not be migrated or may cause the migration to fail. To check for such entries, a system administrator can run the following query against the database:
+
+`SELECT ID, Title, NormalDateBegin, NormalDateEnd FROM tblCollections_Collections WHERE NormalDateBegin > NormalDateEnd;`
+
+### Level/Container Manager
+
+Review the settings to make sure that each 'level container' is appropriately marked with the correct values for "Intellectual Level" and "Physical Container" and that EAD Values are correctly recorded. 
+ +![Level Container Manager](../../../../images/archon_level.jpg) + +Failure to code level container values correctly may result in incorrect nesting of resource components in ArchivesSpace. While the following information does not need to be acted upon prior to migration, please note the following if you find that content is not nested correctly after you migrate: + +- Collection content records that have a level container that is 'Intellectual Only' will be migrated to ArchivesSpace as resource components. Each level/container that has 'intellectual level' checked should have a valid value recorded in the "EAD Level" field (i.e. class, collection, file, fonds, item, otherlevel, recordgrp, series, subfonds, subgrp, subseries). These values are case sensitive, and all other values will be migrated as "otherlevel" on the collection content/resource component records to which they apply. +- Collection content records that have a level container that is 'Physical Only' will be migrated to ArchivesSpace as instance records of the type 'text' attached to a container in ArchivesSpace. These instance/container records will be attached to the intellectual level or levels that are immediate children of the container record as it was previously expressed in Archon. If the instance/container has no children it will be attached to its parent intellectual level instead. For illustrative purposes, the following screenshots show a container record prior to and following migration. + ![Archon container example](../../../../images/archon_container.jpg) +- Collection content records that have both physical and intellectual levels will be migrated as both resource components and instances. In this case the instance will be attached to the resource component. +- Collection content records that are neither physical nor intellectual levels will be migrated as if they were 'Intellectual Only'. This is not recommended and should be fixed prior to migration. 
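The case-sensitivity rule described above can be sketched as follows. This is a minimal illustration of the mapping behavior the text describes, not the migration tool's actual code, and the function name is hypothetical:

```python
# Valid "EAD Level" values listed above; the comparison is case sensitive.
VALID_EAD_LEVELS = {
    "class", "collection", "file", "fonds", "item", "otherlevel",
    "recordgrp", "series", "subfonds", "subgrp", "subseries",
}

def migrated_level(ead_level):
    """Level a collection content record would receive in ArchivesSpace."""
    return ead_level if ead_level in VALID_EAD_LEVELS else "otherlevel"

print(migrated_level("series"))  # series
print(migrated_level("Series"))  # otherlevel (wrong case)
print(migrated_level("box"))     # otherlevel (not in the valid list)
```

Reviewing the Level/Container Manager settings against this list before migrating avoids components being silently demoted to "otherlevel".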
+
+### Collection Content Records
+
+- If a value has not been set in the "Title" or "Inclusive Dates" field of an "intellectual" level/container in Archon, the collection content record being migrated will be supplied a title, based on its "label" value and the "level/container" type set in Archon.
+ ![Collection Content Records](../../../../images/archon_collection.jpg)
+- Optionally, if a migration fails, check for collection content records that reference invalid 'level/containers'. These records are found in the database tables, but are not visible to staff or end users and must be eliminated prior to migration. If not eliminated, the migration will fail. In order to identify these records, you should follow these steps. **Be very careful. If you are uncertain what you are doing, back up the database first or speak with a systems administrator!**
+- In MySQL or SQL Server, open the table titled 'tblCollections_LevelContainers'. Note the 'ID' value recorded for each row (i.e. LevelContainer).
+- Run a query against tblCollections_Content to find records where the LevelContainerID column references an invalid value. For example, if tblCollections_LevelContainers holds 'ID' values 1-6 and 8-22:
+ `SELECT * FROM tblCollections_Content WHERE LevelContainerID > 22 OR (LevelContainerID > 6 AND LevelContainerID < 8);`
+ This will provide a list of all records with an invalid 'LevelContainerID' (i.e. where a record with the primary key referenced by a foreign key cannot be found). Review this list carefully to make sure you are comfortable deleting the records, or change the LevelContainerID to a valid integer if you wish to retain the records. If you choose to delete the records, you will need to do so directly in the database (see below). If you choose to do the latter, you may need to take additional steps directly in the database to link these records to a valid parent content record or collection; additional instructions can be supplied upon request. 
+- Run a query to delete the invalid records from the collections content table. For example:
+ `DELETE FROM tblCollections_Content WHERE LevelContainerID > 22 OR (LevelContainerID > 6 AND LevelContainerID < 8);`
+- Optionally, if the migration fails, check for 'duplicate' collection content records. 'Duplicate' records are those that occupy the same node in the collection/content hierarchy. To check for these records, run the following query in MySQL or SQL Server:
+ `SELECT ParentID, SortOrder, COUNT(*) FROM tblCollections_Content GROUP BY ParentID, SortOrder HAVING COUNT(*) > 1;`
+- The query above checks for records that occupy the same branch and same position in the content hierarchy. If you discover such records, the sort order value of one of the records must be changed, so that both records occupy a unique position. In order to do this, run a query that finds all records attached to the parent record, then run an update query to change the sort order of one of the offending records so that each has a unique sort order. For example, if the query above returns ParentID 8619 as a 'duplicate' value, you would run query one with the appropriate ParentID value to identify the offending records, and query two to fix the problem:
+ **Query one:**
+
+ `SELECT ID, ParentID, SortOrder, Title FROM tblCollections_Content WHERE ParentID=8619;`
+
+ | ID   | ParentID | SortOrder | Title       |
+ | ---- | -------- | --------- | ----------- |
+ | 8620 | 8619     | 1         | to mother   |
+ | 8621 | 8619     | 1         | from mother |
+ | 8622 | 8619     | 3         | to father   |
+ | 6823 | 8619     | 4         | from father |
+
+ **Query two:**
+
+ `UPDATE tblCollections_Content SET SortOrder=2 WHERE ID=8621;`
+
+## Preparing for Migrating Archon Data
+
+The migration process is iterative in nature. You should plan to do several test migrations, culminating in a final migration. Typically, migration will require assistance from a system administrator. 
+
+The migration tool will connect to your Archon installation, read data from defined 'endpoints', and place the information in a target ArchivesSpace instance.
+
+A migration report is generated at the end of each migration routine and can be downloaded from the application. The report indicates errors or issues occurring with the migration. Sample data from a migration report is provided in [Appendix A](#appendix-a-migration-log-review).
+
+You should use this report to determine if any problems observed in the migration results are best remedied in the source data or in the migrated data in the ArchivesSpace instance. If you address the problems in the source data, then you can simply clear the database and conduct the migration again. However, once you accept the migration and make changes to the migrated data in ArchivesSpace, you cannot migrate the source data again without either overwriting the previous migration or establishing a new target ArchivesSpace instance.
+
+Please note, data migration can be a very memory- and time-intensive task due to the large number of records being transferred. As such, we recommend running the Archon migration tool on a server with at least 2GB of available memory. Test migrations have run from under an hour to twelve hours or more in the case of complex and large instances of Archon.
+
+Before starting the migration process, make sure that your current Archon installation is up to date: i.e. that you are using version 3.21 rev3. If you are on an earlier version of Archon, make a copy of the Archon instance, including the database, to be migrated and use it as the source of the migration. It is strongly recommended that you not use your Archon production instance and database as the source of the migration, simply to protect the production version from any anomalies that might occur during the migration process. 
Upgrade the copy of the Archon instance to version 3.21 rev3 prior to starting the migration process.
+
+### Get Archon to ArchivesSpace Migration Tool
+
+Download the latest JAR file release from https://github.com/archivesspace-deprecated/ArchonMigrator/releases/latest. This is an executable JAR file – double-click to run it.
+
+### Install ArchivesSpace Instance
+
+Set up an ArchivesSpace production instance, including a MySQL database to migrate into. Instructions are included at [Getting Started with ArchivesSpace](/administration/getting_started) and [Running ArchivesSpace against MySQL](/provisioning/mysql).
+
+### Prepare to Launch Migration
+
+> **Important Note:** The migration process should be launched from a networked computer with a stable (i.e. wired) connection, and you should turn power save settings off on the client computer you use to launch the migration. So that the migration can proceed in an undisturbed fashion, you should not try to access the ArchivesSpace or Archon front end or public interface until after the migration has completed. **If you fail to follow these instructions, the migration tool may not provide useful feedback and it will be difficult to determine how successful the migration was.**
+
+For the most part, the data migration process should be automatic, with errors being reported as the tool migrates and a log being made available when migration is complete. Depending on the particular data being migrated, various errors may occur. These may require the migration to be re-run after they have been resolved by the user. When this occurs, the MySQL database should be emptied by the system administrator, and the migration rerun after steps are taken to resolve the problem that caused the error.
+
+The time that the migration takes to complete will depend on a number of factors (database size, network performance, etc.), but has been known to take anywhere from a half hour to ten or twelve hours. 
Most of this time will probably be spent migrating collection records.
+
+The following Archon datatypes will migrate, and all relationships that exist between these datatypes should be preserved in ArchivesSpace, except as noted in bold below. For each datatype, post-migration cleanup recommendations are provided in parentheses:
+
+- Editable controlled value lists:
+ - Subject sources (review post migration and merge values with ArchivesSpace defaults or functionally duplicate values, when possible)
+ - Creator sources (review post migration and merge values with ArchivesSpace defaults
+ or functionally duplicate values, when possible)
+ - Extent units/types (merge functionally duplicate values)
+ - Material Types
+ - Container Types
+ - File Types
+ - Processing Priorities
+- Repositories
+- User/logins (users will need to reset passwords)
+- Subjects (subjects of type personal, corporate, or family name are migrated as Agent
+ records, and are linked to resources and digital objects in the subject role. Review these
+ records and merge with duplicate agent names from creator migration, when possible.)
+- Creators/Names
+- Accessions (The migration tool will supply accession identifiers when these are blank in Archon. Review and change values, if appropriate.)
+- Digital Objects: The migration tool will generate digital object metadata records in ArchivesSpace for each digital library record that is stored in your Archon instance. For each file that has an attached digital library record, the migration tool will generate a digital object component and file instance record. In addition, the migration tool will provide a folder containing the source file you uploaded to Archon when you created the record. In order to link these files to the digital file records in ArchivesSpace, you should place the files in a single directory on a webserver. 
+ **To preserve the linkage between each file and its metadata record in ArchivesSpace, you must provide the base URL of the folder where the objects will be placed.** The migration tool prepends this URL to the filename to form a complete path to the object location, for each file being exported, as shown in the screenshot below. (In version 2.2.2 of ArchivesSpace, with the default digital object templates, these files will be available in the public interface by clicking a link.)
+- Locations (Controlled location records are much more granular in ArchivesSpace than in Archon. You should have a location record for each unique combination of location drop-down, range, section, and shelf in Archon, and these records should be linked to top container records which are in turn linked to an instance for each collection where they apply.)
+- Resources and Resource Components (see Locations, above).
+
+Data from the following Archon modules will not migrate to ArchivesSpace:
+
+- Books (Book data could be migrated later if a plugin is developed to support this data).
+- AVSAP/Assessments
+
+## Launch Migration Process
+
+Make sure the ArchivesSpace instance that you are migrating into is up and running, then open up the migration tool.
+
+![Archon migrator](../../../../images/archon_migrator.jpg)
+
+1. Change the default information in the migration tool user interface:
+ - Archon Source – Supply the base URL for the Archon instance.
+ - Archon User – Username for an account with full administrator privileges.
+ - Password – Password for that same account.
+ - Download Digital Object Files checkbox – Check if you want to move any attached digital object files and supply a web path to a web-accessible folder where you intend to place the digital objects after the migration is complete.
+ - Set Download Folder – Clicking this will open a file explorer that will allow you to specify the folder to which you want digital files from Archon to be downloaded. 
+ - Set Default Repository checkbox – Select the "Set Default Repository" checkbox to set which repository Accession and unlinked Digital Object records are copied to. The default is "Based on Linked Collection," which will copy Accession records to the same repository as any Collection records they are linked to, or the first repository if they are not. You can also select a specific repository from the drop-down list.
+ - Host – The URL and port number of the ArchivesSpace backend server.
+ - ASpace admin – User name for the ArchivesSpace "admin" account. The default value of "admin" should work unless it was changed by the ArchivesSpace administrator.
+ - Password – Password for the ArchivesSpace "admin" account. The default value of "admin" should work unless it was changed by the ArchivesSpace administrator.
+ - Reset Password – Each user account transferred has its password reset to this. Please note that users need to change their password when they first log in unless LDAP is used for authentication.
+ - Migration Options – This is needed for the functioning of the migration tool. Please do not make changes to this area.
+ - Output Console – Display section for following the migration while it is running.
+ - View Error Log – Used to view a printout of all the errors encountered during the migration process. This can be used while the migration process is underway as well.
+2. Press the "Copy to ArchivesSpace" button to start the migration process. This starts the migration to the ArchivesSpace instance indicated by the Host URL.
+3. If the migration process fails: Review the error message provided and/or the migration log. Fix any issues that have been identified, clear the target MySQL database, and try again.
+4. When the process has completed:
+ - Download the migration report.
+ - Move digital objects into the folder location corresponding to the URL you provided to the migration tool.
+
+## Assessing the Migration and Cleaning Up Data
+
+1. 
Use the migration report to assess the fidelity of the migration and to determine whether to fix data in the source Archon instance and conduct the migration again, or fix data in the target ArchivesSpace instance. If you choose to fix data in Archon, you will need to clear the ArchivesSpace database of all migrated content and then rerun the migration.
+2. Review the following record types, making sure the correct number of records migrated. In conducting the reviews, look for duplicate or incomplete records, broken links, or truncated data.
+ - Controlled vocabulary lists
+ - Classifications
+ - Accessions
+ - Resources
+ - Digital objects
+ - Subjects (not persons, families, and corporate bodies)
+ - Creators (known as Agents in ArchivesSpace)
+ - Locations
+3. Review closely the set of sample records you selected, comparing data in Archon to data in ArchivesSpace.
+4. If you accept the migration in the ArchivesSpace instance, then proceed to re-establish user passwords. While user records will migrate, the passwords associated with them will not. You will need to reassign those passwords according to the policies or conventions of your repositories.
+
+## Appendix A: Migration Log Review
+
+The migration log provides a description of any irregularities that take place during a migration and should be saved in a secure location for future reference. The log contains both save errors and warnings. The warnings should be reviewed after the migration for information and potential action.
+
+Most warnings will not require a follow-up action. For example, they may note that a supplied value has been provided to meet an ArchivesSpace data model requirement. This occurs for all collections with empty identifiers. Occasionally, warnings will indicate that there was a problem establishing a link between two records for a reason such as a resource component not being found. 
Warnings like this should be cause for review since they may indicate that some data was lost.
+
+Save errors will note that a particular piece of data could not be migrated because it is not supported in the ArchivesSpace data model or for some other reason. In these cases, you should review the record in Archon and, if it was migrated at all, in ArchivesSpace. Oftentimes, these occur due to duplicate records (such as if you have a matching creator and person subject). If a save error occurs due to a duplicate record, this is usually okay but should still be reviewed to make sure there was no data loss. If a save error occurs for any other reason, this typically means the migration will need to be rerun (unless the record it occurred on is not needed or is easier to migrate manually).
+
+Typically, the migration log will record the internal ID of the original Archon object being migrated whenever a save error or warning occurs. This simplifies finding and correcting relevant records. diff --git a/src/content/docs/de/migrations/migration_tools.md b/src/content/docs/de/migrations/migration_tools.md new file mode 100644 index 0000000..523f0e4 --- /dev/null +++ b/src/content/docs/de/migrations/migration_tools.md @@ -0,0 +1,59 @@
+---
+title: Migration tools
+description: Links to tools for migrating data into and out of ArchivesSpace. 
+---
+
+## Archivists' Toolkit
+
+- [AT migration tool instructions](/migrations/migrate_from_archivists_toolkit)
+- [AT migration plugin](https://github.com/archivesspace/at-migration/releases)
+- [AT migration source code](https://github.com/archivesspace/at-migration)
+- [AT migration mapping (for 2.x versions of the tool and ArchivesSpace)](https://github.com/archivesspace/at-migration/blob/master/docs/ATMappingDocument.xlsx)
+
+### Older information
+
+- [AT migration guidelines (for migrations using the original migration tool through version 1.4.2; only supports migrations to version 1.4.2 or lower of ArchivesSpace)](http://archivesspace.org/wp-content/uploads/2016/08/ATMigrationGuidelines-REV-20140417.pdf)
+- [AT migration mapping (for migrations through version 1.4.2 or lower of the tool and ArchivesSpace)](http://archivesspace.org/wp-content/uploads/2016/08/ATMappingDocument_AT-ASPACE_BETA.xls)
+
+## Archon
+
+- [Archon migration tool instructions](/migrations/migrate_from_archon)
+- [Archon migration tool](https://github.com/archivesspace/archon-migration/releases/latest)
+- [Archon migration source code](https://github.com/archivesspace/archon-migration/)
+- [Archon migration mapping (for 2.x versions of the tool and ArchivesSpace)](https://docs.google.com/spreadsheets/d/13soN5djk16QYmRoSajtyAc_nBrNldyL58ViahKFJAog/edit?usp=sharing)
+
+### Older information
+
+- [refactored Archon migration plugin](https://github.com/archivesspace-deprecated/ArchonMigrator/releases)
+- [information about refactoring project](https://archivesspace.atlassian.net/browse/AR-1278)
+- [previous Archon migration plugin](https://github.com/archivesspace/archon-migration/releases)
+- [Plugin read me text](https://github.com/archivesspace-deprecated/ArchonMigrator/blob/master/README.md)
+- [Archon migration guidelines](http://archivesspace.org/wp-content/uploads/2016/05/Archon_Migration_Guidelines-7_13_2017.docx)
+- [Archon migration 
mapping](http://archivesspace.org/wp-content/uploads/2016/08/ArchonSchemaMappingsPublic.xlsx)
+
+## Data Import and Export Maps
+
+- [Accession CSV Map](http://archivesspace.org/wp-content/uploads/2016/05/Accession-CSV-mapping-2013-08-05.xlsx)
+- [Accession CSV Template](https://github.com/archivesspace/archivesspace/tree/master/templates)
+- [Archival Objects from Excel or CSV with Load Via Spreadsheet](https://github.com/archivesspace/archivesspace/tree/master/templates)
+- [Assessment CSV Template](https://github.com/archivesspace/archivesspace/tree/master/templates)
+- [Digital Object CSV Map](http://archivesspace.org/wp-content/uploads/2016/08/DigitalObject-CSV-mapping-2013-02-26.xlsx)
+- [Digital Object CSV Template](https://github.com/archivesspace/archivesspace/tree/master/templates)
+- [Digital Objects Export Maps](http://archivesspace.org/wp-content/uploads/2016/08/ASpace-Dig-Object-Exports.xlsx)
+- [EAD Import / Export Map](https://archivesspace.org/wp-content/uploads/2021/06/EAD-Import-Export-Mapping-20171030.xlsx)
+- [Location Record CSV Template](https://github.com/archivesspace/archivesspace/tree/master/templates)
+- (newly reviewed) [MARCXML Import Map](https://archivesspace.org/wp-content/uploads/2021/06/AS-MARC-import-mappings-2021-06-15.xlsx)
+- [MARCXML Export Map](https://archivesspace.org/wp-content/uploads/2021/06/MARCXML-Export-Mapping-20130715.xlsx)
+- [MARCXML Authority Import / Export Map](https://archivesspace.org/wp-content/uploads/2021/05/Agents-ASpace-to-MARCXMLMay2021.xlsx)
+- [EAC-CPF Import / Export Map](https://archivesspace.org/wp-content/uploads/2021/05/Agents-ASpace-to-EAC-CPFMay2021.xlsx)
+
+### OAI-PMH-only maps
+
+Most ArchivesSpace OAI-PMH responses are based on the export maps above, but there are a few that are only available through OAI-PMH:
+
+- [MODS for resources and resource 
components](https://archivesspace.org/wp-content/uploads/2019/06/MODS-OAI-Export-Mapping-20190610.xlsx)
+- [Dublin Core for resources and resource components](https://archivesspace.org/wp-content/uploads/2019/06/DC-OAI-Export-Mapping-20190610.xlsx)
+- [DCMI Metadata Terms for resources and resource components](https://archivesspace.org/wp-content/uploads/2019/06/DCTerms-OAI-Export-Mapping-20190611.xlsx) diff --git a/src/content/docs/de/provisioning/clustering.md b/src/content/docs/de/provisioning/clustering.md new file mode 100644 index 0000000..db73b24 --- /dev/null +++ b/src/content/docs/de/provisioning/clustering.md @@ -0,0 +1,370 @@ +--- +title: Load balancing and multiple tenants +description: Guidelines for running ArchivesSpace in a clustered environment for load-balancing purposes, and for supporting multiple tenants. +---
+
+This document describes two aspects of running ArchivesSpace in a
+clustered environment: for load-balancing purposes, and for supporting
+multiple tenants (isolated installations of the system in a common
+deployment environment).
+
+The configuration described in this document is one possible approach,
+but it is not intended to be prescriptive: the application layer of
+ArchivesSpace is stateless, so any mechanism you prefer for load
+balancing across web applications should work just as well as the one
+described here.
+
+Unless otherwise stated, it is assumed that you have root access on
+your machines, and all commands are to be run as root (or with sudo). 
+
+## Architecture overview
+
+This document assumes an architecture with the following components:
+
+- A load balancer machine running the Nginx web server
+- Two application servers, each running a full ArchivesSpace
+  application stack
+- A MySQL server
+- A shared NFS volume mounted under `/aspace` on each machine
+
+## Overview of files
+
+The `files` directory in this repository (in the same directory as this
+`README.md`) contains what will become the contents of the `/aspace`
+directory, shared by all servers. It has the following layout:
+
+    /aspace
+    ├── archivesspace
+    │   ├── config
+    │   │   ├── config.rb
+    │   │   └── tenant.rb
+    │   ├── software
+    │   └── tenants
+    │       └── _template
+    │           └── archivesspace
+    │               ├── config
+    │               │   ├── config.rb
+    │               │   └── instance_hostname.rb.example
+    │               └── init_tenant.sh
+    └── nginx
+        └── conf
+            ├── common
+            │   └── server.conf
+            └── tenants
+                └── _template.conf.example
+
+The highlights:
+
+- `/aspace/archivesspace/config/config.rb` -- A global configuration file for all ArchivesSpace instances. Any configuration options added to this file will be applied to all tenants on all machines.
+- `/aspace/archivesspace/software/` -- This directory will hold the master copies of the `archivesspace.zip` distribution. Each tenant will reference one of the versions of the ArchivesSpace software in this directory.
+- `/aspace/archivesspace/tenants/` -- Each tenant will have a sub-directory under here, based on the `_template` directory provided. This holds the configuration files for each tenant.
+- `/aspace/archivesspace/tenants/[tenant name]/config/config.rb` -- The global configuration file for [tenant name]. This contains tenant-specific options that should apply to all of the tenant's ArchivesSpace instances (such as their database connection settings). 
+- `/aspace/archivesspace/tenants/[tenant name]/config/instance_[hostname].rb` -- The configuration file for a tenant's ArchivesSpace instance running on a particular machine. This allows configuration options to be set on a per-machine basis (for example, setting different ports for different application servers) +- `/aspace/nginx/conf/common/server.conf` -- Global Nginx configuration settings (applying to all tenants) +- `/aspace/nginx/conf/tenants/[tenant name].conf` -- A tenant-specific Nginx configuration file. Used to set the URLs of each tenant's ArchivesSpace instances. + +## Getting started + +We'll assume you already have the following ready to go: + +- Three newly installed machines, each running RedHat (or CentOS) + Linux (we'll refer to these as `loadbalancer`, `apps1` and + `apps2`). +- A MySQL server. +- An NFS volume that has been mounted as `/aspace` on each machine. + All machines should have full read/write access to this area. +- An area under `/aspace.local` which will store instance-specific + files (such as log files and Solr indexes). Ideally this is just + a directory on local disk. +- Java 1.6 (or above) installed on each machine. + +### Populate your /aspace/ directory + +Start by copying the directory structure from `files/` into your +`/aspace` volume. This will contain all of the configuration files +shared between servers: + +```shell +mkdir /var/tmp/aspace/ +cd /var/tmp/aspace/ +unzip -x /path/to/archivesspace.zip +cp -av archivesspace/clustering/files/* /aspace/ +``` + +You can do this on any machine that has access to the shared +`/aspace/` volume. + +### Install the cluster init script + +On your application servers (`apps1` and `apps2`) you will need to +install the supplied init script: + +```shell +cp -a /aspace/aspace-cluster.init /etc/init.d/aspace-cluster +chkconfig --add aspace-cluster +``` + +This will start all configured instances when the system boots up, and +can also be used to start/stop individual instances. 
+
+### Install and configure Nginx
+
+You will need to install Nginx on your `loadbalancer` machine, which
+you can do by following the directions at
+http://nginx.org/en/download.html. Using the pre-built packages for
+your platform is fine. At the time of writing, the process for CentOS
+is simply:
+
+```shell
+wget http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm
+rpm -i nginx-release-centos-6-0.el6.ngx.noarch.rpm
+yum install nginx
+```
+
+Nginx will place its configuration files under `/etc/nginx/`. For
+now, the only change we need to make is to configure Nginx to load our
+tenants' configuration files. To do this, edit
+`/etc/nginx/conf.d/default.conf` and add the line:
+
+```
+include /aspace/nginx/conf/tenants/*.conf;
+```
+
+_Note:_ the location of Nginx's main config file might vary between
+systems. Another likely candidate is `/etc/nginx/nginx.conf`.
+
+### Download the ArchivesSpace distribution
+
+Rather than having every tenant maintain their own copy of the
+ArchivesSpace software, we put a shared copy under
+`/aspace/archivesspace/software/` and have each tenant instance refer
+to that copy. To set this up, run the following commands on any one
+of the servers:
+
+```shell
+cd /aspace/archivesspace/software/
+unzip -x /path/to/downloaded/archivesspace-x.y.z.zip
+mv archivesspace archivesspace-x.y.z
+ln -s archivesspace-x.y.z stable
+```
+
+Note that we unpack the distribution into a directory containing its
+version number, and then assign that version the symbolic name
+"stable". This gives us a convenient way of referring to particular
+versions of the software, and we'll use this later on when setting up
+our tenant.
+
+We'll be using MySQL, which means we must make the MySQL connector
+library available. 
To do this, place it in the `lib/` directory of +the ArchivesSpace package: + +```shell +cd /aspace/archivesspace/software/stable/lib +wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.24/mysql-connector-java-5.1.24.jar +``` + +## Defining a new tenant + +With our server setup out of the way, we're ready to define our first +tenant. As shown in _Overview of files_ above, each tenant has their +own directory under `/aspace/archivesspace/tenants/` that holds all of +their configuration files. In defining our new tenant, we will: + +- Create a Unix account for the tenant +- Create a database for the tenant +- Create a new set of ArchivesSpace configuration files for the + tenant +- Set up the database + +Our newly defined tenant won't initially have any ArchivesSpace +instances, but we'll set those up afterwards. + +To complete the remainder of this process, there are a few bits of +information you will need. In particular, you will need to know: + +- The identifier you will use for the tenant you will be creating. + In this example we use `exampletenant`. +- Which port numbers you will use for the application's backend, + Solr instance, staff and public interfaces. These must be free on + your application servers. +- If running each tenant under a separate Unix account, the UID and + GID you'll use for them (which must be free on each of your + servers). +- The public-facing URLs for the new tenant. We'll use + `staff.example.com` for the staff interface, and `public.example.com` + for the public interface. + +### Creating a Unix account + +Although not strictly required, for security and ease of system +monitoring it's a good idea to have each tenant instance running under +a dedicated Unix account. + +We will call our new tenant `exampletenant`, so let's create a user +and group for them now. 
You will need to run these commands on _both_
+application servers (`apps1` and `apps2`):
+
+```shell
+groupadd --gid 2000 exampletenant
+useradd --uid 2000 --gid 2000 exampletenant
+```
+
+Note that we specify a UID and GID explicitly to ensure they match
+across machines.
+
+### Creating the database
+
+ArchivesSpace assumes that each tenant will have their own MySQL
+database. You can create this from the MySQL shell:
+
+```sql
+create database exampletenant default character set utf8;
+grant all on exampletenant.* to 'example'@'%' identified by 'example123';
+```
+
+In this example, we have a MySQL database called `exampletenant`, and
+we grant full access to the user `example` with password `example123`.
+Assuming our database server is `db.example.com`, this corresponds to
+the database URL:
+
+```
+jdbc:mysql://db.example.com:3306/exampletenant?user=example&password=example123&useUnicode=true&characterEncoding=UTF-8
+```
+
+We'll make use of this URL in the following section.
+
+### Creating the tenant configuration
+
+Each tenant has their own set of files under the
+`/aspace/archivesspace/tenants/` directory. We'll define our new
+tenant (called `exampletenant`) by copying the template set of
+configurations and running the `init_tenant.sh` script to set them
+up. We can do this on either `apps1` or `apps2`--it only needs to be
+done once:
+
+```shell
+cd /aspace/archivesspace/tenants
+cp -a _template exampletenant
+```
+
+Note that we've named the tenant `exampletenant` to match the Unix
+account it will run as. Later on, the startup script will use this
+fact to run each instance as the correct user.
+
+For now, we'll just edit the configuration file for this tenant, under
+`exampletenant/archivesspace/config/config.rb`. 
When you open this file you'll see two +placeholders that need filling in: one for your database URL, which in +our case is just: + +``` +jdbc:mysql://db.example.com:3306/exampletenant?user=example&password=example123&useUnicode=true&characterEncoding=UTF-8 +``` + +and the other for this tenant's search, staff and public user secrets, +which should be random, hard to guess passwords. + +## Adding the tenant instances + +To add our tenant instances, we just need to initialize them on each +of our servers. On `apps1` _and_ `apps2`, we run: + +```shell +cd /aspace/archivesspace/tenants/exampletenant/archivesspace +./init_tenant.sh stable +``` + +If you list the directory now, you will see that the `init_tenant.sh` +script has created a number of symlinks. Most of these refer back to +the `stable` version of the ArchivesSpace software we unpacked +previously, and some contain references to the `data` and `logs` +directories stored under `/aspace.local`. + +Each server has its own configuration file that tells the +ArchivesSpace application which ports to listen on. To set this up, +make two copies of the example configuration by running the following +command on `apps1` then `apps2`: + +```shell +cd /aspace/archivesspace/tenants/exampletenant/archivesspace +cp config/instance_hostname.rb.example config/instance_`hostname`.rb +``` + +Then edit each file to set the URLs that the instance will use. 
+Here's our `config/instance_apps1.example.com.rb`:
+
+```ruby
+{
+  :backend_url => "http://apps1.example.com:8089",
+  :frontend_url => "http://apps1.example.com:8080",
+  :solr_url => "http://apps1.example.com:8090",
+  :indexer_url => "http://apps1.example.com:8091",
+  :public_url => "http://apps1.example.com:8081",
+}
+```
+
+Note that the filename is important here: it must be:
+
+```
+instance_[server hostname].rb
+```
+
+These URLs will determine which ports the application listens on when
+it starts up, and are also used by the ArchivesSpace indexing system
+to track updates across the cluster.
+
+### Starting up
+
+As a one-off, we need to populate this tenant's database with the
+default set of tables. You can do this by running the
+`setup-database.sh` script on either `apps1` or `apps2`:
+
+```shell
+cd /aspace/archivesspace/tenants/exampletenant/archivesspace
+scripts/setup-database.sh
+```
+
+With the two instances configured, you can now use the init script to
+start them up on each server:
+
+```shell
+/etc/init.d/aspace-cluster start-tenant exampletenant
+```
+
+and you can monitor each instance's log file under
+`/aspace.local/tenants/exampletenant/logs/`. Once they're started,
+you should be able to connect to each instance with your web browser
+at the configured URLs.
+
+## Configuring the load balancer
+
+Our final step is configuring Nginx to accept requests for our staff
+and public interfaces and forward them to the appropriate application
+instance. Working on the `loadbalancer` machine, we create a new
+configuration file for our tenant:
+
+```shell
+cd /aspace/nginx/conf/tenants
+cp -a _template.conf.example exampletenant.conf
+```
+
+Now open `/aspace/nginx/conf/tenants/exampletenant.conf` in an
+editor. You will need to:
+
+- Replace `<tenantname>` with `exampletenant` where it appears.
+- Change the `server` URLs to match the hostnames and ports you
+  configured each instance with. 
+- Insert the tenant's hostnames for each `server_name` entry. In + our case these are `public.example.com` for the public interface, and + `staff.example.com` for the staff interface. + +Once you've saved your configuration, you can test it with: + + /usr/sbin/nginx -t + +If Nginx reports that all is well, reload the configurations with: + + /usr/sbin/nginx -s reload + +And, finally, browse to `http://public.example.com/` to verify that Nginx +is now accepting requests and forwarding them to your app servers. +We're done! diff --git a/src/content/docs/de/provisioning/domains.md b/src/content/docs/de/provisioning/domains.md new file mode 100644 index 0000000..9fa0d3e --- /dev/null +++ b/src/content/docs/de/provisioning/domains.md @@ -0,0 +1,85 @@ +--- +title: Serving over subdomains +description: How to configure ArchivesSpace and your web server to serve the application over subdomains. +--- + +This document describes how to configure ArchivesSpace and your web server to serve the application over subdomains (e.g., `http://staff.myarchive.org/` and `http://public.myarchive.org/`), which is the recommended +practice. Separate documentation is available if you wish to [serve ArchivesSpace under a prefix](/provisioning/prefix) (e.g., `http://aspace.myarchive.org/staff` and +`http://aspace.myarchive.org/public`). + +1. [Configuring Your Firewall](#Step-1%3A-Configuring-Your-Firewall) +2. [Configuring Your Web Server](#Step-2%3A-Configuring-Your-Web-Server) + - [Apache](#Apache) + - [Nginx](#Nginx) +3. [Configuring ArchivesSpace](#Step-3%3A-Configuring-ArchivesSpace) + +## Step 1: Configuring Your Firewall + +Since using subdomains negates the need for users to access the application directly on ports 8080 and 8081, these should be locked down to access by localhost only. 
On a Linux server, this can be done using iptables:
+
+```shell
+iptables -A INPUT -p tcp -s localhost --dport 8080 -j ACCEPT
+iptables -A INPUT -p tcp --dport 8080 -j DROP
+iptables -A INPUT -p tcp -s localhost --dport 8081 -j ACCEPT
+iptables -A INPUT -p tcp --dport 8081 -j DROP
+```
+
+## Step 2: Configuring Your Web Server
+
+### Apache
+
+The [mod_proxy module](https://httpd.apache.org/docs/2.4/mod/mod_proxy.html) is necessary for Apache to route public web traffic to ArchivesSpace's ports as designated in your config.rb file (ports 8080 and 8081 by default).
+
+This can be set up as a reverse proxy in the Apache configuration like so:
+
+```apache
+<VirtualHost *:80>
+  ServerName public.myarchive.org
+  ProxyPass / http://localhost:8081/
+  ProxyPassReverse / http://localhost:8081/
+</VirtualHost>
+
+<VirtualHost *:80>
+  ServerName staff.myarchive.org
+  ProxyPass / http://localhost:8080/
+  ProxyPassReverse / http://localhost:8080/
+</VirtualHost>
+```
+
+The purpose of ProxyPass is to route _incoming_ traffic on the public URL (public.myarchive.org) to port 8081 of your server, where ArchivesSpace's public interface sits. The purpose of ProxyPassReverse is to intercept _outgoing_ traffic and rewrite the header to match the URL that the browser is expecting to see (public.myarchive.org). 
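To make the `ProxyPass`/`ProxyPassReverse` relationship concrete, here is a minimal sketch in plain Ruby (not Apache internals; the hostnames are the same examples used above) of the rewrite that `ProxyPassReverse` applies to a backend redirect's `Location` header:

```ruby
# Illustrative sketch only: ProxyPassReverse rewrites backend URLs in
# response headers so the browser sees the public hostname rather than
# the internal localhost:port address.
def proxy_pass_reverse(location, backend, public_base)
  # Only rewrite headers that point at the proxied backend.
  location.start_with?(backend) ? location.sub(backend, public_base) : location
end

backend     = "http://localhost:8081/"
public_base = "http://public.myarchive.org/"

puts proxy_pass_reverse("http://localhost:8081/repositories/2", backend, public_base)
# => http://public.myarchive.org/repositories/2

# Redirects to unrelated hosts are left untouched:
puts proxy_pass_reverse("https://example.org/elsewhere", backend, public_base)
# => https://example.org/elsewhere
```

Without the reverse rewrite, a redirect issued by the backend would expose `localhost:8081` to the browser, which is unreachable from outside the server.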
+
+### nginx
+
+Using nginx as a reverse proxy requires a configuration file like the following:
+
+```nginx
+server {
+  listen 80;
+  listen [::]:80;
+  server_name staff.myarchive.org;
+  location / {
+    proxy_pass http://localhost:8080/;
+  }
+}
+
+server {
+  listen 80;
+  listen [::]:80;
+  server_name public.myarchive.org;
+  location / {
+    proxy_pass http://localhost:8081/;
+  }
+}
+```
+
+## Step 3: Configuring ArchivesSpace
+
+The only configuration within ArchivesSpace that needs to occur is adding your domain names to the following lines in config.rb:
+
+```ruby
+AppConfig[:frontend_proxy_url] = 'http://staff.myarchive.org'
+AppConfig[:public_proxy_url] = 'http://public.myarchive.org'
+```
+
+This configuration allows the staff edit links to appear on the public site to users logged in to the staff interface.
+
+Do **not** change `AppConfig[:public_url]` or `AppConfig[:frontend_url]`; these must retain their port numbers in order for the application to run. diff --git a/src/content/docs/de/provisioning/https.md b/src/content/docs/de/provisioning/https.md new file mode 100644 index 0000000..b02732c --- /dev/null +++ b/src/content/docs/de/provisioning/https.md @@ -0,0 +1,163 @@ +--- +title: Serving over HTTPS +description: Installing ArchivesSpace in such a manner that all end-user requests are served over HTTPS. +---
+
+This document describes the approach for those wishing to install
+ArchivesSpace in such a manner that all end-user requests (i.e., URLs)
+are served over HTTPS rather than HTTP. 
For the purposes of this documentation, the URLs for the staff and public interfaces will be:
+
+- `https://staff.myarchive.org` - staff interface
+- `https://public.myarchive.org` - public interface
+
+The configuration described in this document is one possible approach,
+and to keep things simple the following are assumed:
+
+- ArchivesSpace is running on a single Linux server
+- The server is running Apache or Nginx
+- You have obtained an SSL certificate and key from an authority
+- You have ensured that appropriate firewall ports have been opened (80 and 443).
+
+1. [Configuring the Web Server](<#Step-1%3A-Configure-Web-Server-(Apache-or-Nginx)>)
+   - [Apache](#Apache)
+     - [Setting up SSL](#Setting-up-SSL)
+     - [Setting up Redirects](#Setting-up-Redirects)
+   - [Nginx](#Nginx)
+2. [Configuring ArchivesSpace](#Step-2%3A-Configure-ArchivesSpace)
+
+## Step 1: Configure Web Server (Apache or Nginx)
+
+### Apache
+
+Information about configuring Apache for SSL can be found at http://httpd.apache.org/docs/current/ssl/ssl_howto.html. You should read
+that documentation before attempting to configure SSL.
+
+#### Setting up SSL
+
+Use the `NameVirtualHost` and `VirtualHost` directives to proxy
+requests to the actual application URLs. This requires the use of the `mod_proxy` module in Apache. 
+
+```apache
+NameVirtualHost *:443
+
+<VirtualHost *:443>
+  ServerName staff.myarchive.org
+  SSLEngine On
+  SSLCertificateFile "/path/to/your/cert.crt"
+  SSLCertificateKeyFile "/path/to/your/key.key"
+  RequestHeader set X-Forwarded-Proto "https"
+  ProxyPreserveHost On
+  ProxyPass / http://localhost:8080/
+  ProxyPassReverse / http://localhost:8080/
+</VirtualHost>
+
+<VirtualHost *:443>
+  ServerName public.myarchive.org
+  SSLEngine On
+  SSLCertificateFile "/path/to/your/cert.crt"
+  SSLCertificateKeyFile "/path/to/your/key.key"
+  RequestHeader set X-Forwarded-Proto "https"
+  ProxyPreserveHost On
+  ProxyPass / http://localhost:8081/
+  ProxyPassReverse / http://localhost:8081/
+</VirtualHost>
+```
+
+You may optionally set the `Secure` attribute on the `Set-Cookie` header by adding `Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure`. When a cookie has the Secure attribute, the user agent will include the cookie in an HTTP request only if the request is transmitted over a secure channel.
+
+Users may encounter a warning in the browser's console stating `Cookie “archivesspace_session” does not have a proper “SameSite” attribute value. Soon, cookies without the “SameSite” attribute or with an invalid value will be treated as “Lax”. This means that the cookie will no longer be sent in third-party contexts` (example from Firefox 104) or something similar. Some browsers (for example, Chrome version 80 or above) already enforce this. Standard ArchivesSpace installations should be unaffected, but if you encounter problems with integrations and/or customizations of your particular installation, the following directive may be required: `Header edit Set-Cookie ^(.*)$ $1;SameSite=None;Secure`. Alternatively, it may be the case that `SameSite=Lax` (the default) or even `SameSite=Strict` are more appropriate depending on your functional and/or security requirements. 
Please refer to https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie/SameSite or other resources for more information.
+
+#### Setting up Redirects
+
+When running a site over HTTPS, it's a good idea to set up a redirect to ensure any outdated HTTP requests are routed to the correct URL. This can be done through Apache as follows:
+
+```apache
+<VirtualHost *:80>
+  ServerName staff.myarchive.org
+  RewriteEngine On
+  RewriteCond %{HTTPS} off
+  RewriteRule (.*) https://staff.myarchive.org$1 [R,L]
+</VirtualHost>
+
+<VirtualHost *:80>
+  ServerName public.myarchive.org
+  RewriteEngine On
+  RewriteCond %{HTTPS} off
+  RewriteRule (.*) https://public.myarchive.org$1 [R,L]
+</VirtualHost>
+```
+
+### Nginx
+
+Information about configuring nginx for SSL can be found at http://nginx.org/en/docs/http/configuring_https_servers.html. You should read
+that documentation before attempting to configure SSL.
+
+```nginx
+server {
+  listen 80;
+  listen [::]:80;
+  server_name staff.myarchive.org;
+  return 301 https://staff.myarchive.org$request_uri;
+}
+
+server {
+  listen 443 ssl;
+  server_name staff.myarchive.org;
+  charset utf-8;
+
+  ssl_certificate /path/to/your/fullchain.pem;
+  ssl_certificate_key /path/to/your/key.pem;
+
+  location / {
+    proxy_pass http://localhost:8080;
+  }
+}
+
+server {
+  listen 80;
+  listen [::]:80;
+  server_name public.myarchive.org;
+  return 301 https://public.myarchive.org$request_uri;
+}
+
+server {
+  listen 443 ssl;
+  server_name public.myarchive.org;
+  charset utf-8;
+
+  ssl_certificate /path/to/your/fullchain.pem;
+  ssl_certificate_key /path/to/your/key.pem;
+
+  location / {
+    proxy_pass http://localhost:8081;
+  }
+}
+```
+
+## Step 2: Configure ArchivesSpace
+
+The following lines need to be altered in the config.rb file:
+
+```ruby
+AppConfig[:frontend_proxy_url] = "https://staff.myarchive.org"
+AppConfig[:public_proxy_url] = "https://public.myarchive.org"
+```
+
+These lines don't need to 
be altered and should remain with their default values. E.g.:
+
+```ruby
+AppConfig[:frontend_url] = "http://localhost:8080"
+AppConfig[:public_url] = "http://localhost:8081"
+AppConfig[:frontend_proxy_prefix] = proc { "#{URI(AppConfig[:frontend_proxy_url]).path}/".gsub(%r{/+$}, "/") }
+AppConfig[:public_proxy_prefix] = proc { "#{URI(AppConfig[:public_proxy_url]).path}/".gsub(%r{/+$}, "/") }
+``` diff --git a/src/content/docs/de/provisioning/index.md b/src/content/docs/de/provisioning/index.md new file mode 100644 index 0000000..95ea9e7 --- /dev/null +++ b/src/content/docs/de/provisioning/index.md @@ -0,0 +1,15 @@ +--- +title: Provisioning and server configuration +description: The index to the provisioning section of the ArchivesSpace technical documentation. +---
+
+- [Running ArchivesSpace with load balancing and multiple tenants](./clustering.html)
+- [Serving ArchivesSpace over subdomains](./domains.html)
+- [Serving ArchivesSpace user-facing applications over HTTPS](./https.html)
+- [JMeter Test Group Template](./jmeter.html)
+- [Running ArchivesSpace against MySQL](./mysql.html)
+- [Application monitoring with New Relic](./newrelic.html)
+- [Running ArchivesSpace under a prefix](./prefix.html)
+- [robots.txt](./robots.html)
+- [Running ArchivesSpace with external Solr](./solr.html)
+- [Tuning ArchivesSpace](./tuning.html) diff --git a/src/content/docs/de/provisioning/jmeter.md b/src/content/docs/de/provisioning/jmeter.md new file mode 100644 index 0000000..0373a4d --- /dev/null +++ b/src/content/docs/de/provisioning/jmeter.md @@ -0,0 +1,13 @@ +--- +title: JMeter Test Group Template +description: How to create a JMeter Test Group. 
+---
+
+## Creating a test group
+
+Load the file `example_test_plan.jmx` into JMeter and make sure the following are true for the example to run successfully:
+
+- The backend is running on localhost port 4567
+- There is at least one repository, and its URL is `/repositories/2`
+
+The example will log in to the backend, store the session key as a JMeter variable, and make two basic requests, one of which will require a session key. diff --git a/src/content/docs/de/provisioning/mysql.md b/src/content/docs/de/provisioning/mysql.md new file mode 100644 index 0000000..8ba110a --- /dev/null +++ b/src/content/docs/de/provisioning/mysql.md @@ -0,0 +1,89 @@ +--- +title: Using MySQL +description: Instructions for how to set up MySQL with ArchivesSpace. +---
+
+Out of the box, the ArchivesSpace distribution runs against an
+embedded database, but this is only suitable for demonstration
+purposes. When you are ready to start using ArchivesSpace with
+real users and data, you should switch to using MySQL. MySQL offers
+significantly better performance when multiple people are using the
+system, and will ensure that your data is kept safe.
+
+ArchivesSpace is currently able to run on MySQL versions 5.x and 8.x.
+
+## Download MySQL Connector
+
+ArchivesSpace requires the
+[MySQL Connector for Java](http://dev.mysql.com/downloads/connector/j/),
+which must be downloaded separately because of its licensing agreement.
+Download the Connector and place it in a location where ArchivesSpace can
+find it on its classpath:
+
+```shell
+$ cd lib
+$ curl -Oq https://repo1.maven.org/maven2/com/mysql/mysql-connector-j/9.1.0/mysql-connector-j-9.1.0.jar
+```
+
+Note that the version of the MySQL connector may be different by the
+time you read this.
+
+## Set up your MySQL database
+
+Next, create an empty database in MySQL and grant access to a dedicated
+ArchivesSpace user. The following example uses username `as`
+and password `as123`. 
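Those credentials end up embedded in the JDBC connection URL that `config.rb` uses (shown further down). As a hedged illustration, the URL's shape can be sketched in plain Ruby (`jdbc_mysql_url` is a hypothetical helper, not part of ArchivesSpace):

```ruby
# Hypothetical helper: assemble the JDBC URL ArchivesSpace expects from
# its parts. The useUnicode/characterEncoding parameters keep the
# connection in UTF-8, which ArchivesSpace requires.
def jdbc_mysql_url(host:, db:, user:, password:, port: 3306)
  params = "user=#{user}&password=#{password}" \
           "&useUnicode=true&characterEncoding=UTF-8"
  "jdbc:mysql://#{host}:#{port}/#{db}?#{params}"
end

puts jdbc_mysql_url(host: "localhost", db: "archivesspace", user: "as", password: "as123")
# => jdbc:mysql://localhost:3306/archivesspace?user=as&password=as123&useUnicode=true&characterEncoding=UTF-8
```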
+
+**NOTE: WHEN CREATING THE DATABASE, YOU MUST SET THE DEFAULT CHARACTER
+ENCODING FOR THE DATABASE TO BE `utf8mb4`.** This is particularly important
+if you use a MySQL client to create the database (e.g. Navicat, MySQL
+Workbench, phpMyAdmin, etc.).
+
+<!-- This is also true of MySQL 8 in general... -->
+
+**NOTE: If using AWS RDS MySQL databases, binary logging is not enabled by default and updates will fail.** To enable binary logging, you must create a custom db parameter group for the database and set `log_bin_trust_function_creators = 1`. See [Working with DB Parameter Groups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html) for information about RDS parameter groups. Within a MySQL session you can also try `SET GLOBAL log_bin_trust_function_creators = 1;`
+
+```shell
+$ mysql -uroot -p
+
+mysql> create database archivesspace default character set utf8mb4;
+Query OK, 1 row affected (0.08 sec)
+```
+
+If using MySQL 5.7 and below:
+
+```sql
+mysql> grant all on archivesspace.* to 'as'@'localhost' identified by 'as123';
+Query OK, 0 rows affected (0.21 sec)
+```
+
+If using MySQL 8+:
+
+```sql
+mysql> create user 'as'@'localhost' identified by 'as123';
+Query OK, 0 rows affected (0.08 sec)
+
+mysql> grant all privileges on archivesspace.* to 'as'@'localhost';
+Query OK, 0 rows affected (0.21 sec)
+```
+
+Then, modify your `config/config.rb` file to refer to your MySQL
+database. When you modify your configuration file, **MAKE SURE THAT YOU
+SPECIFY `UTF-8` AS THE CHARACTER ENCODING FOR THE DATABASE CONNECTION** as shown
+below:
+
+```ruby
+AppConfig[:db_url] = "jdbc:mysql://localhost:3306/archivesspace?user=as&password=as123&useUnicode=true&characterEncoding=UTF-8"
+```
+
+There is a database setup script that will create all the tables that
+ArchivesSpace requires. 
Run this with:
+
+```shell
+scripts/setup-database.sh # or setup-database.bat under Windows
+```
+
+You can now follow the instructions in the "Getting Started" section to start
+your ArchivesSpace application.
+
+**NOTE for MySQL 8:** MySQL 8 uses a new method (caching_sha2_password) as the default authentication plugin instead of the old mysql_native_password that MySQL 5.7 and older used. This may require starting a MySQL 8 server with the `--default-authentication-plugin=mysql_native_password` option. You may also be able to change the auth mechanism on a per-user basis by logging into MySQL and running `ALTER USER 'as'@'localhost' IDENTIFIED WITH mysql_native_password BY 'as123';`. Also be sure to have the latest [MySQL Connector for Java](http://dev.mysql.com/downloads/connector/j/) from MySQL in your /lib/ directory for ArchivesSpace.
diff --git a/src/content/docs/de/provisioning/newrelic.md b/src/content/docs/de/provisioning/newrelic.md
new file mode 100644
index 0000000..49ff283
--- /dev/null
+++ b/src/content/docs/de/provisioning/newrelic.md
@@ -0,0 +1,40 @@
+---
+title: Application monitoring with New Relic
+description: Instructions for how to set up New Relic for application monitoring on ArchivesSpace.
+---
+
+[New Relic](http://newrelic.com/) is an application performance monitoring tool (amongst other things).
+
+**To use it with ArchivesSpace you must:**
+
+- Sign up for an account at New Relic (there is a free tier and paid plans)
+- Edit config.rb to:
+  - activate the `newrelic` plugin
+  - add the New Relic license key
+  - add an application name to identify the ArchivesSpace instance in the New Relic dashboard
+
+For example, in config.rb:
+
+```ruby
+## You may have other plugins
+AppConfig[:plugins] = ['local', 'newrelic']
+
+AppConfig[:newrelic_key] = "enteryourkeyhere"
+AppConfig[:newrelic_app_name] = "ArchivesSpace"
+```
+
+- Install the New Relic agent library by initializing the plugin:
+
+```shell
+## For Linux/OSX
+$ scripts/initialize-plugin.sh newrelic
+
+## For Windows
+% scripts\initialize-plugin.bat newrelic
+```
+
+- Start, or restart, ArchivesSpace to pick up the configuration.
+
+Within a few minutes the application should be visible in the New Relic dashboard with data being collected.
+
+---
diff --git a/src/content/docs/de/provisioning/prefix.md b/src/content/docs/de/provisioning/prefix.md
new file mode 100644
index 0000000..d0ddc38
--- /dev/null
+++ b/src/content/docs/de/provisioning/prefix.md
@@ -0,0 +1,64 @@
+---
+title: Proxy prefix
+description: Instructions for serving each user-facing ArchivesSpace application under a prefix rather than as its own subdomain.
+---
+
+**Important Note: Prefixes do NOT work properly in versions between 2.0.1 and 2.2.2**
+
+This document describes a simple approach for those wishing to deviate from the recommended
+practice of running each user-facing ArchivesSpace application on its own subdomain, and instead
+serve each application under a prefix, e.g.
+
+```
+http://aspace.myarchive.org/staff
+http://aspace.myarchive.org/public
+```
+
+The configuration described in this document is one possible approach,
+and to keep things simple the following are assumed:
+
+- ArchivesSpace is running on a single Linux server
+- The server is running the Apache 2.2+ webserver
+
+Unless otherwise stated, it is assumed that you have root access on
+your machines, and all commands are to be run as root (or with sudo).
+
+## Step 1: Set up proxies in your Apache configuration
+
+The following edits can be made in the httpd.conf file itself, or in an included file:
+
+```apache
+ProxyPass /staff http://localhost:8080/staff
+ProxyPassReverse /staff http://localhost:8080/
+ProxyPass /public http://localhost:8081/public
+ProxyPassReverse /public http://localhost:8081/
+```
+
+Now restart Apache.
+
+## Step 2: Install and configure ArchivesSpace
+
+Follow the instructions in the main README to download and install ArchivesSpace.
+
+Open the file `archivesspace/config/config.rb` and add the following lines:
+
+```ruby
+AppConfig[:frontend_proxy_url] = 'http://aspace.myarchive.org/staff'
+AppConfig[:public_proxy_url] = 'http://aspace.myarchive.org/public'
+```
+
+(Note: These lines should NOT begin with a '#' character.)
+
+Start ArchivesSpace.
+
+## Step 3: (Optional) Lock down ports 8080 and 8081
+
+By default, the staff and public applications are accessible on ports 8080 and 8081:
+
+```
+http://aspace.myarchive.org:8080
+http://aspace.myarchive.org:8081
+```
+
+Since these are not the URLs at which users should access the application, you will probably
+want to close them off. See README_HTTPS for more information on closing ports using iptables.
diff --git a/src/content/docs/de/provisioning/robots.md b/src/content/docs/de/provisioning/robots.md
new file mode 100644
index 0000000..702522a
--- /dev/null
+++ b/src/content/docs/de/provisioning/robots.md
@@ -0,0 +1,45 @@
+---
+title: robots.txt
+description: Instructions for adding a robots.txt to your ArchivesSpace site.
+---
+
+The easiest way to add a `robots.txt` to your site is simply to create
+one in your `/config/` directory. This file will be served as a standard
+`robots.txt` file when you start your site.
+
+If you're not able to do that, you can use a separate file and your proxy.
+
+For Apache the config would look like this:
+
+```apache
+<Location "/robots.txt">
+  SetHandler None
+  Require all granted
+</Location>
+Alias /robots.txt /var/www/robots.txt
+```
+
+For nginx, it would look more like this:
+
+```nginx
+location /robots.txt {
+  alias /var/www/robots.txt;
+}
+```
+
+You may also add robots meta-tags to your `layout_head.html.erb` to be included in the header area of your site.
+
+For example:
+
+`<meta name="robots" content="noindex, nofollow">`
+
+A sensible starting point for a `robots.txt` file looks something like this:
+
+```
+User-agent: *
+Disallow: /search*
+Disallow: /inventory/*
+Disallow: /collection_organization/*
+Disallow: /repositories/*/top_containers/*
+Disallow: /check_session*
+Disallow: /repositories/*/resources/*/tree/*
+```
diff --git a/src/content/docs/de/provisioning/solr.md b/src/content/docs/de/provisioning/solr.md
new file mode 100644
index 0000000..84845d0
--- /dev/null
+++ b/src/content/docs/de/provisioning/solr.md
@@ -0,0 +1,205 @@
+---
+title: External Solr
+description: Instructions for installing and using external Solr with ArchivesSpace.
+---
+
+:::note
+For ArchivesSpace > 3.1.1, external Solr is **required**. For previous versions it is optional.
+:::
+
+## Supported Solr Versions
+
+See the [Solr requirement notes](/administration/getting_started#solr).
+
+## Install Solr
+
+Refer to the [Solr documentation](https://solr.apache.org/guide/solr/latest/) for instructions on setting up Solr on your server.
+
+Download the Solr package and extract it to a folder of your choosing. Do not start Solr
+until you have added the ArchivesSpace configuration files.
+
+**We strongly recommend a standalone mode installation. No support will be provided for Solr
+Cloud deployments specifically (i.e. we cannot help troubleshoot ZooKeeper).**
+
+## Create a configset
+
+Before running Solr you will need to
+set up a [configset](https://solr.apache.org/guide/8_10/config-sets.html#configsets-in-standalone-mode).
+
+### Create a new directory
+
+#### Linux
+
+Using the command line:
+
+```shell
+mkdir -p /$path/$to/$solr/server/solr/configsets/archivesspace/conf
+```
+
+Be sure to replace `/$path/$to/$solr` with your actual Solr location, which might be something like:
+
+```shell
+mkdir -p /opt/solr/server/solr/configsets/archivesspace/conf
+```
+
+#### Windows
+
+Right click on your Solr directory and open it in Windows Terminal (PowerShell):
+
+```
+mkdir -p .\server\solr\configsets\archivesspace\conf
+```
+
+You should see something like this in response:
+
+```
+Directory: C:\Users\archivesspace\Projects\solr-8.10.1\server\solr\configsets\archivesspace
+Mode LastWriteTime Length Name
+---- ------------- ------ ----
+d----- 10/25/2021 12:15 PM conf
+```
+
+### Copy the config files
+
+Copy the ArchivesSpace Solr configuration files from the `solr` directory included
+in the zip file release into the `$SOLR_HOME/server/solr/configsets/archivesspace/conf` directory.
+
+There should be four files:
+
+- schema.xml
+- solrconfig.xml
+- stopwords.txt
+- synonyms.txt
+
+```shell
+ls .\server\solr\configsets\archivesspace\conf\
+
+Directory: C:\Users\archivesspace\Projects\solr-8.10.1\server\solr\configsets\archivesspace\conf
+
+Mode LastWriteTime Length Name
+---- ------------- ------ ----
+-a---- 10/25/2021 12:18 PM 18291 schema.xml
+-a---- 10/25/2021 12:18 PM 3046 solrconfig.xml
+-a---- 10/25/2021 12:18 PM 0 stopwords.txt
+-a---- 10/25/2021 12:18 PM 0 synonyms.txt
+```
+
+_Note: your exact output may be slightly different._
+
+## Setup the environment
+
+When using Solr v9 or later, the use of [Solr modules](https://solr.apache.org/guide/solr/latest/configuration-guide/solr-modules.html) is required.
+We recommend using the environment variable option to specify the modules to use:
+
+```shell
+SOLR_MODULES=analysis-extras
+```
+
+This environment variable needs to be available to the Solr instance at runtime.
+
+For instructions on how to set an environment variable, here are some recommended articles:
+
+- When using [Linux](https://www.freecodecamp.org/news/how-to-set-an-environment-variable-in-linux)
+- When using a [Mac](https://phoenixnap.com/kb/set-environment-variable-mac)
+- When using [Windows](https://docs.oracle.com/cd/E83411_01/OREAD/creating-and-modifying-environment-variables-on-windows.htm#OREAD158). Note that on Windows, the variable name should be `SOLR_MODULES` and the variable value `analysis-extras`.
+
+## Setup a Solr core
+
+With the `configset` in place, start Solr:
+
+```bash
+bin/solr start
+```
+
+Wait for Solr to start. On Windows (running as a non-admin user) this looks something like:
+
+```shell
+.\bin\solr start
+"java version info is 11.0.12"
+"Extracted major version is 11"
+OpenJDK 64-Bit Server VM warning: JVM cannot use large page memory because it does not have enough privilege to lock pages in memory.
+Waiting up to 30 to see Solr running on port 8983
+Started Solr server on port 8983.
Happy searching! +``` + +You can check that Solr is running on [http://localhost:8983](http://localhost:8983). + +Now create the core: + +```shell +bin/solr create -c archivesspace -d archivesspace +``` + +You should see confirmation: + +```shell +"java version info is 11.0.12" +"Extracted major version is 11" + +Created new core 'archivesspace' +``` + +In the browser you should be able to access the [ArchivesSpace schema](http://localhost:8983/solr/#/archivesspace/files?file=schema.xml). + +## Disable the embedded server Solr instance (optional <= 3.1.1 only) + +Edit the ArchivesSpace config.rb file: + +```ruby +AppConfig[:enable_solr] = false +``` + +Note that doing this means that you will have to backup Solr manually. + +## Set the Solr url in your config.rb file + +This config setting should point to your Solr instance: + +```ruby +AppConfig[:solr_url] = "http://localhost:8983/solr/archivesspace" +``` + +If you are not running ArchivesSpace and Solr on the same server, update +`localhost` to your Solr address. + +By default, on startup, ArchivesSpace will check that the Solr configuration +appears to be correct and will raise an error if not. You can disable this check +by setting `AppConfig[:solr_verify_checksums] = false` in `config.rb`. + +Please note: if you're upgrading an existing installation of ArchivesSpace to use an external Solr, you will need to trigger a full re-index. +See [Indexes](/administration/indexes) for more details. + +--- + +You can now follow the instructions in the [Getting started](/administration/getting_started) section to start +your ArchivesSpace application. + +--- + +## Upgrading Solr + +If you are using an older version of Solr than is recommended you may need (if called out +in release notes) or want to upgrade. 
Before performing an upgrade it is recommended that you review: + +- [Solr upgrade notes](https://solr.apache.org/guide/solr/latest/upgrade-notes/solr-upgrade-notes.html) +- [ArchivesSpace's release notes](https://github.com/archivesspace/archivesspace/releases) + +You should also review this document as the installation steps may include +instructions that were not present in the past. For example, from Solr v9 there is a +requirement to use Solr modules with instructions to configure the modules using environment +variables. + +The crucial part will be ensuring that ArchivesSpace's schema is being used for the +ArchivesSpace Solr index. The config setting `AppConfig[:solr_verify_checksums] = true` +will perform a check on startup that confirms this is the case, otherwise ArchivesSpace +will not be able to start up. + +From ArchivesSpace 3.5+ `AppConfig[:solr_verify_checksums]` does not check the +`solrconfig.xml` file. Therefore you can make changes to it without ArchivesSpace failing +on startup. However, for an upgrade you will want to at least compare the ArchivesSpace +`solrconfig.xml` to the one that is in use in case there are changes that need to be made to +work with the upgraded-to version of Solr. For example the ArchivesSpace Solr v8 `solrconfig.xml` +will not work as is with Solr v9. + +After upgrading Solr you should trigger a full re-index. Instructions for this are in +[Indexes](/administration/indexes). diff --git a/src/content/docs/de/provisioning/tuning.md b/src/content/docs/de/provisioning/tuning.md new file mode 100644 index 0000000..b36f9f2 --- /dev/null +++ b/src/content/docs/de/provisioning/tuning.md @@ -0,0 +1,51 @@ +--- +title: Performance tuning +description: Guidance for performance tuning of the ArchivesSpace stack. +--- + +ArchivesSpace is a stack of web applications which may require special tuning in order to run most effectively. 
This is especially the case for institutions with lots of data or many simultaneous users editing metadata.
+Keep in mind that ArchivesSpace can be hosted on multiple servers, either in a [multitenant setup](/provisioning/clustering) or by deploying the various applications (i.e. backend, frontend, public, Solr, and indexer) on separate servers.
+
+## Application Settings
+
+The application itself can be tuned in numerous ways. It’s a good idea to read the [configuration documentation](/customization/configuration), as there are numerous settings that can be adjusted to fit your needs.
+
+An important thing to note is that since ArchivesSpace is a Java application, it’s possible to set the memory allocations used by the JVM. There are numerous articles on the internet full of information about what the optimal settings are, which will depend greatly on the load your server is experiencing and the hardware. It’s a good idea to monitor the application and ensure that it’s not hitting the upper limit of what you’ve set as the heap.
+
+These settings are:
+
+- ASPACE_JAVA_XMX : maximum heap space (maps to Java’s Xmx, default "-Xmx1024m")
+- ASPACE_JAVA_XSS : thread stack size (maps to Xss, default "-Xss2m")
+- ASPACE_GC_OPTS : options used by the Java garbage collector (default: "-XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:NewRatio=1")
+
+To modify these settings, Linux users can either export an environment variable (e.g. `$ export ASPACE_JAVA_XMX="-Xmx2048m"`) or edit the archivesspace.sh startup script and modify the defaults.
+
+Windows users must edit the archivesspace.bat file.
+
+If you're having trouble with errors like `java.lang.OutOfMemoryError`, try doubling the `ASPACE_JAVA_XMX`.
On Linux you can do this either by setting an environment variable like `$ export ASPACE_JAVA_XMX="-Xmx2048m"` or by editing archivesspace.sh:
+
+```shell
+if [ "$ASPACE_JAVA_XMX" = "" ]; then
+    ASPACE_JAVA_XMX="-Xmx2048m"
+fi
+```
+
+For Windows, you'll change archivesspace.bat:
+
+```shell
+java -Darchivesspace-daemon=yes %JAVA_OPTS% -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:NewRatio=1 -Xss2m -Xmx2048m -Dfile.encoding=UTF-8 -cp "%GEM_HOME%\gems\jruby-rack-1.1.12\lib\*;lib\*;launcher\lib\*!JRUBY!" org.jruby.Main "launcher/launcher.rb" > "logs/archivesspace.out" 2>&1
+```
+
+**NOTE: THE APPLICATION WILL NOT USE THE AVAILABLE MEMORY UNLESS YOU SET THE MAXIMUM HEAP SIZE TO ALLOCATE IT.** For example, if your server has 4 gigs of RAM, but you haven’t adjusted the ArchivesSpace settings, you’ll only be using 1 gig.
+
+## MySQL
+
+The ArchivesSpace application can hit a database server rather hard, since it’s a metadata-rich application. There are many articles online about how to tune a MySQL database. A good place to start is to try something like [MySQL Tuner](http://mysqltuner.com/) or [Tuning Primer](https://rtcamp.com/tutorials/mysql/tuning-primer/), which can give good hints on possible tweaks to make to your MySQL server configuration.
+
+Keep a close eye on the memory available to the server, as well as your InnoDB buffer pool.
+
+## Solr
+
+The internet is full of many suggestions on how to optimize a Solr index. [Running an external Solr index](/provisioning/solr) can be beneficial to the performance of ArchivesSpace, since that moves the index to its own server.
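To make the heap flags above easier to reason about, here is a small illustrative helper (not part of ArchivesSpace) that converts an `-Xmx` value into plain megabytes before you commit it to `ASPACE_JAVA_XMX`:

```ruby
# Illustrative helper (not part of ArchivesSpace): convert a JVM -Xmx flag
# such as "-Xmx2048m" or "-Xmx4g" into megabytes.
def xmx_to_mb(flag)
  match = flag.match(/\A-Xmx(\d+)([kmg])\z/i)
  raise ArgumentError, "unrecognized flag: #{flag}" unless match

  value = match[1].to_i
  case match[2].downcase
  when 'k' then value / 1024 # kilobytes to megabytes
  when 'm' then value        # already megabytes
  when 'g' then value * 1024 # gigabytes to megabytes
  end
end

xmx_to_mb('-Xmx2048m') # => 2048
xmx_to_mb('-Xmx4g')    # => 4096
```

For example, on a server with 4 gigs of RAM, leaving headroom for the OS and other processes, `-Xmx2048m` or `-Xmx3g` would be reasonable values to check with this helper.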
diff --git a/src/content/docs/de/release-notes/v4.0.0.md b/src/content/docs/de/release-notes/v4.0.0.md new file mode 100644 index 0000000..3324b7b --- /dev/null +++ b/src/content/docs/de/release-notes/v4.0.0.md @@ -0,0 +1,89 @@ +--- +title: v4.0.0 +--- + +## ArchivesSpace v4.0.0 Release Summary + +Major technical infrastructure upgrades and user interface improvements characterize this release. Key changes include: + +## Breaking Changes + +- **Breaking change**: [OAI identifiers now use colon separator between the namespace and identifier](#api-and-integration-updates) +- **Breaking change**: [Solr 9 now required](#major-infrastructure-updates) +- **Breaking change**: [the Sequence module has been removed from core ArchivesSpace](#plugins-and-configuration) + +## Major Infrastructure Updates + +- **Breaking change**: Solr 9 now required +- Upgraded to newer versions of: + - Bootstrap (4.3) + - jQuery (3.7.0) + - Rails (6.1.6) + - JRuby (9.3.x.x) + - Nokogiri (1.13.10) + - Sequel (5.9.0) +- Frontend and public development web server migrated from Jetty to Puma (6.4.2) +- Staff application CSS migrated from Less to Sass +- Java 8 no longer supported - requires Java 11 or 17 +- Docker now supported as recommended deployment method + +## Public User Interface Improvements + +- Collection organization sidebar can now be configured for left/right positioning in config.rb +- New information and options for large finding aids + - Displays percentage of loaded records in infinite scroll + - Option to load all children for a resource at once (vs infinite scroll) +- Search terms now highlighted in results +- Fixed bug causing extra lines in notes display +- Change PDF label from "Print" to "Download PDF" +- PDF uses Kurinto fonts by default +- Improved hyperlink display in classification descriptions + +## Staff Interface Enhancements + +- Bulk updater plugin now part of core application +- New ability to duplicate full resource or archival object records +- Enhanced 
spreadsheet importers
+  - Added new fields for digital objects to bulk Digital Object spreadsheet
+  - Location imports can include an owner repository
+  - Archival Object CSV imports now respect publication status
+  - New option to download partially completed digital object spreadsheet template
+- Fixed agent merge preview page
+- Improved staff plugins dropdown in repository settings
+- Fixes to the Rapid Data Entry modal
+- Fixed tooltip bugs
+- Improved Jobs status layouts
+
+## EAD Export Changes
+
+- More fields have special characters escaped
+- Removed commas and periods from langmaterial notes
+- Leading XML tags in Revision Description will no longer cause invalid XML
+
+## Documentation and Testing
+
+- Launched new technical documentation site at docs.archivesspace.org
+- Ported all Selenium tests to Capybara
+- Added functionality for test failure screenshots
+
+## API and Integration Updates
+
+- **Breaking change**: OAI identifiers now use colon separator between the namespace and identifier
+
+## Security and Administration
+
+- New config.rb option to allow users with the Administrator role to access the system information page
+- Added config.rb option for favicon display
+- PUI PDFs will now include clearer error messages when generation fails
+- Enhanced bulk import/update capabilities with new configuration options
+
+## Plugins and Configuration
+
+- **Breaking change**: the Sequence module has been removed from core ArchivesSpace
+
+## Community Contributions
+
+- 76 community contributions accepted
+- 134 Pull Requests merged
+- 146 Jira Tickets closed
+- Contributions from multiple community members and organizations
diff --git a/src/content/docs/es/404.md b/src/content/docs/es/404.md
new file mode 100644
index 0000000..976d1cc
--- /dev/null
+++ b/src/content/docs/es/404.md
@@ -0,0 +1,9 @@
+---
+title: '404'
+editUrl: false
+lastUpdated: false
+tableOfContents: false
+hero:
+  title: '404'
+  tagline: Page not found.
Check the URL or try searching for what you were looking for. +--- diff --git a/src/content/docs/es/about/authoring.md b/src/content/docs/es/about/authoring.md new file mode 100644 index 0000000..3b2b1c8 --- /dev/null +++ b/src/content/docs/es/about/authoring.md @@ -0,0 +1,308 @@ +--- +title: Authoring content +description: This page outlines best practices for updating and writing markdown files for the tech-docs repository. +--- + +The Tech Docs site contains two types of content--documentation pages and blog posts. Both content types are written in [Markdown](https://en.wikipedia.org/wiki/Markdown) and define page-specific details as [yaml](https://yaml.org/) key:value pairs. + +Tech Docs uses [GitHub-flavored Markdown](https://github.github.com/gfm/), a variant of Markdown syntax, and [SmartyPants](https://daringfireball.net/projects/smartypants/), a typographic punctuation plugin. These tools provide authors niceties like generating clickable links from text, creating lists and tables, formatting for quotations and em-dashes, and more. + +## Where pages go + +### Documentation pages + +Documentation pages live under `src/content/docs/`. Each page is a `.md` or `.mdx` file. The URL path is `/` plus the file path relative to that directory, without the extension—for example, `src/content/docs/architecture/public.md` is served at `/architecture/public`. Nested folders add segments to the path. + +### Blog + +Blog posts live under `src/content/blog/` as `.md` or `.mdx` files. The URL is `/blog/` plus the path to the file relative to that folder, without the extension—for example, `src/content/blog/v4-2-0-release-candidate.md` is served at `/blog/v4-2-0-release-candidate`. Nested folders add path segments to the URL. + +Valid frontmatter and body content are required for the site to be built and published. 
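The file-path-to-URL rules described above can be sketched in a few lines of Ruby. This is illustrative only; the real mapping is performed by Astro at build time:

```ruby
# Illustrative sketch (not site code) of the URL rules described above:
# docs pages map to "/" + path, blog posts to "/blog/" + path,
# both relative to their content directory and without the extension.
def url_for(path)
  case path
  when %r{\Asrc/content/docs/(.+)\.mdx?\z} then "/#{$1}"
  when %r{\Asrc/content/blog/(.+)\.mdx?\z} then "/blog/#{$1}"
  else raise ArgumentError, "not a content file: #{path}"
  end
end

url_for('src/content/docs/architecture/public.md')      # => "/architecture/public"
url_for('src/content/blog/v4-2-0-release-candidate.md') # => "/blog/v4-2-0-release-candidate"
```

Nested folders fall out of the same rule, since the captured path keeps its intermediate segments.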
+ +## Markdown + +Common use of Markdown throughout Tech Docs includes: + +- [headings](#headings) +- [links](#links) +- [emphasizing text](#emphasizing-text) +- [paragraphs](#paragraphs) +- [lists](#lists) +- [code examples](#code-examples) +- [diagrams](#diagrams) +- [asides](#asides) +- [images](#images) + +### Headings + +Start a new line with between 2 and 6 `#` symbols, followed by a single space, and then the heading text. + +```md +## Example second-level heading +``` + +The number of `#` symbols corresponds to the heading level in the document hierarchy. **The first heading level is reserved for the page title** (available in the page [YAML frontmatter](#yaml-frontmatter)). Therefore the first _authored_ heading on every page should be a second level heading (`##`). + +:::note[Second level heading requirement] +Authored headings should start at the second level (`##`) on every page, since the first level (`#`) is reserved for the page title which is machine-generated. +::: + +```md +<!-- example.md --> + +## Second level heading + +Notice the page starts with a second level heading. + +Notice the blank lines above and below each heading. + +### Third level heading + +This is demo text under the Third level heading section. + +#### Fourth level heading + +##### Fifth level heading + +###### Sixth and final level heading +``` + +### Links + +Create a link by wrapping the link text in brackets (`[ ]`) immediately followed by the external link URL, or internal link path, wrapped in parentheses (`( )`). + +```md +[text](URL or path) +``` + +Be sure not to include any space between the wrapped text and URL. + +```md +<!-- example.md --> + +See the [TechDocs source code](https://github.com/archivesspace/tech-docs). 
+``` + +#### In documentation pages + +##### To other pages + +When linking to another Tech Docs documentation page, start with a forward slash (`/`), followed by the location of the page as found in the `src/content/docs/` directory, and omit the file extension (`.md`). + +```md +✅ [Public user interface](/architecture/public) + +❌ [Public user interface](architecture/public) +❌ [Public user interface](./architecture/public) +❌ [Public user interface](../architecture/public) +❌ [Public user interface](/architecture/public.md) +``` + +:::note[Internal link requirements] +Links to other Tech Docs documentation pages should: + +1. start with a forward slash (`/`) +2. reflect the location of the page as found in `src/content/docs/` +3. not include the file extension (`.md`) + +::: + +##### Within a page + +Starlight provides [automatic heading anchor links](https://starlight.astro.build/guides/authoring-content/#automatic-heading-anchor-links). To link to a section within a page, use the `#` symbol followed by the HTML `id` of the relevant section heading. + +```md +<!-- src/content/docs/about/authoring.md --> + +See the [Links](#links) section on this page. + +See the [Public configuration options](/architecture/public#configuration). +``` + +:::tip +A section heading's `id` is usually the same text string as the heading itself, but in all lowercase letters and with all single spaces converted to single hyphens. See the actual HTML `id` by right clicking on the heading to "inspect" it. +::: + +#### In blog posts + +When you write the body of a blog post, links to documentation pages use the same pattern as [in documentation pages](#to-other-pages): a leading `/` and the path under `src/content/docs/` without `.md`, for example `[Public user interface](/architecture/public)`. 
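The internal link requirements listed above lend themselves to a quick automated check. A minimal sketch (illustrative only, not part of the Tech Docs build):

```ruby
# Illustrative check of the internal-link rules described above:
# links must start with "/" and must not include the file extension.
def valid_internal_link?(href)
  href.start_with?('/') && !href.end_with?('.md', '.mdx')
end

valid_internal_link?('/architecture/public')    # => true
valid_internal_link?('architecture/public')     # => false
valid_internal_link?('/architecture/public.md') # => false
```

A check like this could run in CI to catch relative or extension-bearing links before they break in the built site.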
+
+Links to another blog post use `/blog/` plus that post’s path under `src/content/blog/` without the extension—the same shape as its public URL (see [Blog](#blog) under [Where pages go](#where-pages-go)). For example, `src/content/blog/v4-2-0-release-candidate.md` is linked as `[v4.2.0 release candidate](/blog/v4-2-0-release-candidate)`. Nested folders add segments, for example `/blog/releases/v4-2-0` for `src/content/blog/releases/v4-2-0.md`.
+
+### Emphasizing text
+
+Wrap text to be emphasized with `_` for italics, `**` for bold, and `~~` for strikethrough.
+
+```md
+<!-- example.md -->
+
+_Italicized_ text
+
+**Bold** text
+
+**_Bold and italicized_** text
+
+~~Strikethrough~~ text
+```
+
+### Paragraphs
+
+Create paragraphs by leaving a blank line between lines of text.
+
+```md
+<!-- example.md -->
+
+This is one paragraph.
+
+This is another paragraph.
+```
+
+### Lists
+
+Precede each line in a list with a dash (`-`) for a bulleted list, or a number followed by a period (`1.`) for an ordered list.
+
+```md
+<!-- example.md -->
+
+- Resource
+- Digital Object
+- Accession
+
+1. Accession
+2. Digital Object
+3. Resource
+```
+
+### Code examples
+
+Wrap inline code with a single backtick (`` ` ``).
+
+Wrap code blocks with triple backticks (` ``` `), also known as a "code fence", placed just above and below the code. Append the name of the code's language or its file extension to the first set of backticks for syntax highlighting.
+
+````md
+<!-- example.md -->
+
+The `JSONModel` class is central to ArchivesSpace.
+
+```ruby
+def h(str)
+  ERB::Util.html_escape(str)
+end
+```
+````
+
+### Diagrams
+
+Tech Docs supports [Mermaid](https://mermaid.js.org/) diagrams in both documentation pages and blog posts.
+
+Use a fenced code block with `mermaid` as the language:
+
+````md
+```mermaid
+flowchart TD
+  A[Staff user edits record] --> B[Indexer updates Solr]
+  B --> C[Updated record in PUI]
+```
+````
+
+Rendered example:
+
+```mermaid
+flowchart TD
+  A[Staff user edits record] --> B[Indexer updates Solr]
+  B --> C[Updated record in PUI]
+```
+
+### Asides
+
+Asides are useful for highlighting secondary or marketing information.
+
+Wrap content in a pair of triple colons (`:::`) and append one of the aside types (ie: `note`) to the first set of colons. The aside types are `note`, `tip`, `caution`, and `danger`, each of which has its own set of colors and icon. Customize the title by wrapping text in brackets (`[ ]`) placed after the note type.
+
+```md
+<!-- example.md -->
+
+:::tip
+Become an ArchivesSpace member today! 🎉
+:::
+
+:::note[Some custom title]
+
+### Markdown is supported in asides
+
+![Pic alt text](../../../../images/example.jpg)
+
+Lorem ipsum dolor sit amet consectetur, adipisicing elit.
+:::
+```
+
+:::note
+Asides are a custom Markdown feature provided by the underlying [Starlight framework](https://starlight.astro.build/guides/authoring-content/#asides) that builds Tech Docs.
+:::
+
+:::tip[Customize the aside title]
+Customize the aside title by wrapping text in brackets (`[ ]`) after the note type, in this case `tip`.
+:::
+
+### Images
+
+Show an image using an exclamation point (`!`), followed by the image's [alt text](https://en.wikipedia.org/wiki/Alt_attribute) (a brief description of the image) wrapped in brackets (`[ ]`), followed by the external URL, or internal path, wrapped in parentheses (`( )`).
+
+```md
+<!-- example.md -->
+
+![A dozen Krispy Kreme donuts in a box](https://example.com/donuts.jpg)
+
+![The ArchivesSpace logo](../../../../images/logo.svg)
+```
+
+:::note[Put images in `src/images`]
+All internal images belong in the `src/images` directory. The relative path to images from this file is `../../../../images`.
+::: + +## YAML frontmatter + +Each content file starts with [YAML](https://yaml.org/) frontmatter: metadata in a block wrapped in triple dashes (`---`). Use the templates below so every field we rely on is set explicitly. For more on how the site build system reads these values, see [Documentation content collection and schema](/about/development#documentation-content-collection-and-schema) and [Blog content collection and schema](/about/development#blog-content-collection-and-schema) on the Development page. + +### Documentation pages + +```md +--- +title: Using MySQL +description: Instructions for how to set up MySQL with ArchivesSpace. +--- +``` + +- **`title`** — Page title shown in the layout, browser tab, and metadata. +- **`description`** — Short summary used for SEO, search, and social previews. + +### Blog posts + +```md +--- +title: v4.2.0 Release Candidate +metaDescription: Early access to ArchivesSpace v4.2.0-RC1 is now available. +teaser: ArchivesSpace <a href="https://github.com/archivesspace/archivesspace/releases/tag/v4.2.0-RC1">v4.2.0-RC1</a> has landed for early testing. +pubDate: 2026-03-20 +authors: + - Pat Doe +updatedDate: 2026-03-21 +--- +``` + +- **`title`** — Post headline on the post page and on the blog index. +- **`metaDescription`** — Short summary for page metadata (SEO) and for the index card when `teaser` is omitted. +- **`teaser`** — Text or HTML for the blog index card (links and light markup are common here). +- **`pubDate`** — Publication date; posts are ordered by this value, newest first. Use an ISO-style date (`YYYY-MM-DD`). +- **`authors`** — List of author names, shown comma-separated on the index and post page. +- **`updatedDate`** — Last-updated date in the same `YYYY-MM-DD` form when the post is revised after publication. + +## Image files + +All internal image files used in Tech Docs content should go in the `src/images` directory, located at the root of this project. 
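The `pubDate` ordering described in the blog frontmatter section above (newest first) can be sketched as follows. The post data here is hypothetical:

```ruby
# Illustration of the blog index ordering described above: posts are sorted
# by pubDate, newest first. The posts here are hypothetical sample data.
require 'date'

posts = [
  { title: 'Some earlier post',        pubDate: Date.new(2025, 6, 10) },
  { title: 'v4.2.0 Release Candidate', pubDate: Date.new(2026, 3, 20) },
]

newest_first = posts.sort_by { |post| post[:pubDate] }.reverse
newest_first.first[:title] # => "v4.2.0 Release Candidate"
```

Because ordering relies on comparable dates, keeping `pubDate` in the ISO-style `YYYY-MM-DD` form matters.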
+
+## Writing conventions
+
+- Plugins, not plug-ins
+- Titles are sentence-case ("Application monitoring with New Relic")
+- Documentation page titles prefer '-ing' verb forms ("Using MySQL", "Serving over HTTPS")
diff --git a/src/content/docs/es/about/development.md b/src/content/docs/es/about/development.md
new file mode 100644
index 0000000..40771f9
--- /dev/null
+++ b/src/content/docs/es/about/development.md
@@ -0,0 +1,318 @@
+---
+title: Development
+description: This page describes how to set up the tech-docs repository, build the website, update dependencies, and run tests
+# This is the last page in the sidebar, so point to Home next instead of
+# the Help Center which comes after this page in the sidebar
+next:
+  link: /
+  label: Home
+---
+
+Tech Docs is a [Node.js](https://nodejs.org) application, built with [Astro](https://astro.build/) and its [Starlight](https://starlight.astro.build/) documentation site framework. The source code is hosted on [GitHub](https://github.com/archivesspace/tech-docs). The site is statically built and (temporarily) hosted via [Cloudflare Pages](https://pages.cloudflare.com/). Content is written in [Markdown](/about/authoring#markdown). When the source code changes, a new set of static files is generated and published shortly after.
+
+## Dependencies
+
+Tech Docs depends on the following open source software (see `.nvmrc` and `package.json` for versions):
+
+1. [Node.js](https://nodejs.org) - JavaScript development and build environment; the version noted in `.nvmrc` reflects the default version of Node.js in the Cloudflare Pages build image
+2. [Astro](https://astro.build/) - Static site generator conceptually based on "components" (React, Vue, Svelte, etc.) rather than "templates" (Jekyll, Handlebars, Pug, etc.)
+   1. [Starlight](https://starlight.astro.build/) - Astro plugin and theme for documentation websites
+   2. [Sharp](https://sharp.pixelplumbing.com/) - Image transformation library used by Astro
+3. 
[Cypress](https://www.cypress.io/) - End-to-end testing framework
+4. [Stylelint](https://stylelint.io/) - CSS linter used locally in text editors and remotely in [CI](#cicd) for testing
+   1. [stylelint-config-recommended](https://github.com/stylelint/stylelint-config-recommended) - Base set of lint rules
+   2. [postcss-html](https://github.com/ota-meshi/postcss-html) - PostCSS syntax for parsing HTML (and HTML-like files, including .astro files)
+   3. [stylelint-config-html](https://github.com/ota-meshi/stylelint-config-html) - Allows Stylelint to parse .astro files
+5. [Prettier](https://prettier.io/) - Source code formatter used locally in text editors and remotely in [CI](#cicd) for testing
+   1. [prettier-plugin-astro](https://github.com/withastro/prettier-plugin-astro) - Allows Prettier to parse .astro files via the command line
+
+## Local development
+
+Run Tech Docs locally by cloning the Tech Docs repository, installing project dependencies, and spinning up a development server:
+
+```sh
+# Requires git and Node.js
+
+# Clone Tech Docs and move to it
+git clone https://github.com/archivesspace/tech-docs.git
+cd tech-docs
+
+# Install dependencies
+npm install
+
+# Run dev server
+npm start
+```
+
+Now go to [localhost:4321](http://localhost:4321) to see Tech Docs running locally. Changes to the source code will be immediately reflected in the browser.
+
+### Building the site
+
+Building the site creates a set of static files, found in `dist` after build, that can be served locally or deployed to a server. Sometimes building the site surfaces errors not found in the development environment.
+
+```sh
+# Build the site and output it to dist/
+npm run build
+```
+
+:::tip
+Serve the built output by running `npm run preview` after a build.
+:::
+
+### Available `npm` scripts
+
+The following scripts are made available via `package.json`. Invoke any script on the command line from the project root by prepending it with the `npm run` command, e.g. `npm run start`. 
+ +- `start` -- run Astro dev server +- `build` -- build Tech Docs for production +- `preview` -- serve the static build +- `astro` -- get Astro help +- `test:dev` -- run tests in development mode +- `test:prod` -- run tests in production mode +- `test` -- defaults to run tests in production mode +- `prettier:check` -- check formatting with Prettier +- `prettier:fix` -- fix possible format errors with Prettier +- `stylelint:check` -- lint CSS with Stylelint +- `stylelint:fix` -- fix possible CSS lint errors with Stylelint + +## Documentation pages + +Documentation pages are implemented with Starlight’s `docs` content collection. Source files are in `src/content/docs/`, and Starlight generates their routes as part of the normal Astro static build output (no separate docs build step). Sidebar hierarchy is configured in `src/siteNavigation.json`. For copy-paste templates and short author-facing field guidance, see [YAML frontmatter](/about/authoring#yaml-frontmatter). + +### Adding documentation pages + +To add a new documentation page: + +1. Create a Markdown file in the appropriate docs section directory under `src/content/docs/`. +2. Add that page to `src/siteNavigation.json` in the correct section and in the correct order so it appears in the sidebar navigation as desired. +3. If the new page becomes the first page in its section, update the corresponding homepage hero link in `src/components/HomePage.astro` so the section link points to the new first page. + +### Legacy `index.md` pages + +Some section directories still contain legacy `index.md` pages from the old Tech Docs site. Those pages can still be routed (for example `/architecture` and `/architecture/index`), but they are not included in the sidebar since they are not listed in `src/siteNavigation.json`. 
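
To make the "Adding documentation pages" steps above concrete, here is a minimal sketch of a new page file — the file name, section, and field values are entirely hypothetical:

```md
<!-- src/content/docs/administration/monitoring.md (hypothetical) -->
---
title: Monitoring ArchivesSpace
description: How to monitor a running ArchivesSpace installation.
---

## First heading

Page content starts here.
```

This file would be served at `/administration/monitoring`; a matching entry would then be added to `src/siteNavigation.json` in the appropriate section, at the position where the page should appear in the sidebar.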
+ +### Documentation content collection and schema + +In `src/content.config.ts`, the `docs` collection uses `docsLoader()` and [Starlight’s frontmatter schema](https://starlight.astro.build/reference/frontmatter/) via `docsSchema()`, extended with `issueUrl` and `issueText`. Frontmatter is validated at build time. Starlight requires a `title`; other keys are optional unless your page has a specific need. + +| Field | Required | Purpose | +| ----------------- | -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `title` | Yes | Page title in the layout, browser tab, and metadata. | +| `description` | No | Short summary for SEO, search, and social previews. Most pages set this; it is omitted on a few pages (for example [Staff interface](/architecture/frontend), [404](/404)). | +| `slug` | No | Overrides the URL segment instead of deriving it from the file path. | +| `editUrl` | No | Overrides the “Edit page” URL, or `false` to hide the link (for example on [404](/404)). | +| `head` | No | Extra tags for the document head (meta, link, custom title, etc.). | +| `tableOfContents` | No | Table of contents: `false` to hide, or `{ minHeadingLevel, maxHeadingLevel }` to tune range. | +| `template` | No | Starlight layout template (for example `splash`). | +| `hero` | No | Hero area for splash-style pages (`title`, `tagline`, optional `image`, `actions`, etc.). | +| `banner` | No | Optional banner above the page content. | +| `lastUpdated` | No | Override the displayed last-updated date, or `false` to hide it. | +| `prev` | No | Previous pagination link: `false`, a string label, or `{ link, label }`. | +| `next` | No | Next pagination link: `false`, a string label, or `{ link, label }`. 
For example, [Development](/about/development) sets this so “next” goes to Home instead of the external Help Center entry after it in the sidebar. | +| `pagefind` | No | Set `false` to omit the page from the Pagefind index. | +| `draft` | No | When `true`, exclude the page from production builds. | +| `sidebar` | No | Per-page sidebar label, order, badge, `hidden`, or link `attrs`. The main sidebar structure is configured in `src/siteNavigation.json`. | +| `issueUrl` | No | URL for the footer “report an issue” link, or `false` to hide it. Defaults in `src/content.config.ts` when omitted; authors may set explicitly (see [YAML frontmatter](/about/authoring#yaml-frontmatter)). | +| `issueText` | No | Label text for that footer link. Defaults in `src/content.config.ts` when omitted; authors may set explicitly (see [YAML frontmatter](/about/authoring#yaml-frontmatter)). | + +### Documentation routes + +- URLs are derived from file paths in `src/content/docs/` unless `slug` is set in frontmatter. +- Previous/next pagination is derived from sidebar order unless `prev`/`next` are overridden in frontmatter. + +### Documentation UI components + +| Area | Location | +| -------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------- | +| Sidebar hierarchy and grouping | `src/siteNavigation.json` | +| Default docs page title rendering | `src/components/CustomPageTitle.astro` (falls back to Starlight’s default `PageTitle` for non-blog routes) | +| Footer metadata/navigation (edit link, issue link, etc.) | `src/components/overrides/Footer.astro`, `src/components/overrides/EditLink.astro`, `src/components/IssueLink.astro` | + +### Documentation tests + +Documentation-page behavior is covered in Cypress, mainly `cypress/e2e/content-pages.cy.js` (sidebar, table of contents, footer metadata links, and pagination). 
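
The collection registration described above might be sketched roughly as follows, assuming current Starlight APIs (`docsLoader`, `docsSchema` with its `extend` option); treat this as illustrative, not the repository's actual file:

```typescript
// Sketch of src/content.config.ts (illustrative, not the actual file)
import { defineCollection, z } from 'astro:content';
import { docsLoader } from '@astrojs/starlight/loaders';
import { docsSchema } from '@astrojs/starlight/schema';

export const collections = {
  docs: defineCollection({
    loader: docsLoader(),
    // Starlight's frontmatter schema, extended with the two custom
    // fields described in the table above
    schema: docsSchema({
      extend: z.object({
        issueUrl: z.union([z.string(), z.literal(false)]).optional(),
        issueText: z.string().optional(),
      }),
    }),
  }),
};
```

Because the schema is validated at build time, a page with a mistyped field fails `npm run build` rather than publishing broken metadata.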
+ +## Blog + +The [blog](/blog) is implemented as an Astro content collection alongside the docs collection. Post source files are in `src/content/blog/`; routes live under `src/pages/blog/`. There is no separate blog build step—blog pages are part of the normal Astro static output, and site search ([Search](#search)) indexes them like other HTML. For where to put files and example frontmatter, see [Authoring content](/about/authoring#where-pages-go) and [YAML frontmatter](/about/authoring#yaml-frontmatter). + +### Adding blog posts + +To add a new blog post, create a new Markdown file in `src/content/blog/` with the required frontmatter fields (`title`, `metaDescription`, `pubDate`, and `authors`). + +Optional fields (`teaser` and `updatedDate`) can also be added as needed. No `src/siteNavigation.json` changes are required for blog posts; valid files in the collection are included automatically when the site builds. + +### Blog content collection and schema + +The `blog` collection is registered in `src/content.config.ts` with a Zod schema. Frontmatter is validated at build time. Adding or renaming frontmatter fields requires updating that schema and every consumer of `entry.data` (blog pages, middleware, and tests). + +| Field | Required | Purpose | +| ----------------- | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `title` | Yes | Post headline on the post page and index card. May include HTML for display; the document `<title>` and prev/next pagination labels **strip HTML** from `title`. | +| `metaDescription` | Yes | Short summary for page meta description (SEO). Used as the index teaser text when `teaser` is omitted. | +| `teaser` | No | HTML or plain text for the blog index card (`set:html`). 
Prefer this for links or light HTML on the index; plain text in `title` is safest where tab titles and pagination matter. | +| `pubDate` | Yes | Publication date; posts are sorted by this field, newest first. Parsed from frontmatter and formatted for display in **UTC** on the index and post header. | +| `authors` | Yes | Array of author display names; shown comma-separated on the index and post page. | +| `updatedDate` | No | Optional revision date (`YYYY-MM-DD`). Stored in frontmatter but **not shown in the UI** today; useful for future display or consistency with the authoring template. | + +### Blog routes + +- `src/pages/blog/index.astro` — `/blog` index; loads posts, sorts by `pubDate` descending, passes data to the index UI. +- `src/pages/blog/[id].astro` — individual posts; `getStaticPaths` comes from the collection, so new valid posts appear on the next build. + +### Blog route middleware + +`src/blogRouteData.js` is Starlight route middleware for blog routes. It injects `pubDate`, `authors`, and `postTitle` for post pages and sets prev/next pagination (older post as “Previous,” newer as “Next”). Pagination labels use titles with HTML stripped. + +### Blog UI components + +| Area | Location | +| ------------------------------------ | ----------------------------------------------------------------------------- | +| Index list and cards | `src/components/BlogIndex.astro` | +| Index page title | `src/components/BlogIndexTitleHeader.astro` | +| Post title, date, authors, back link | `src/components/BlogPostTitleHeader.astro`, `src/components/BackToBlog.astro` | +| Default vs blog title | `src/components/CustomPageTitle.astro` | +| Header “Blog” link | `src/components/overrides/Header.astro` | +| Blog layout / sidebar behavior | `src/components/overrides/PageFrame.astro` | + +### Blog tests + +End-to-end coverage is in `cypress/e2e/blog.cy.js`. Update these tests when you change blog markup, URLs, or visible behavior. 
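
Two details above are easy to get wrong when touching this code: the newest-first sort by `pubDate`, and the HTML-stripping applied to titles in plain-text contexts. A small illustrative sketch (the sample data and the `stripHtml` helper are assumptions, not the site's actual code):

```javascript
// Illustrative posts, shaped like blog frontmatter
const posts = [
  { title: 'v4.1.0 released', pubDate: new Date('2025-06-01') },
  { title: 'v4.2.0-RC1 <em>early access</em>', pubDate: new Date('2026-03-20') },
];

// /blog index order: newest first
const sorted = [...posts].sort((a, b) => b.pubDate - a.pubDate);

// Plain-text contexts (document <title>, prev/next labels) drop HTML tags
const stripHtml = (html) => html.replace(/<[^>]*>/g, '');

console.log(sorted.map((p) => stripHtml(p.title)));
```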
+
+## Search
+
+Site search is a [Starlight feature](https://starlight.astro.build/guides/site-search/):
+
+> By default, Starlight sites include full-text search powered by [Pagefind](https://pagefind.app/), which is a fast and low-bandwidth search tool for static sites.
+>
+> No configuration is required to enable search. Build and deploy your site, then use the search bar in the site header to find content.
+
+:::note
+Search only runs in production builds, not in the dev server.
+:::
+
+## Theme customization
+
+Starlight can be customized in various ways, including:
+
+- [Settings](https://starlight.astro.build/guides/customization/) -- see `astro.config.mjs`
+- [CSS](https://starlight.astro.build/guides/css-and-tailwind/) -- see `src/styles/custom.css`
+- [Components](https://starlight.astro.build/guides/overriding-components/) -- see `src/components`
+
+## Static assets
+
+### Images
+
+Most image files should be stored in `src/images`. This allows for [processing by Astro](https://docs.astro.build/en/guides/images/), which includes performance optimizations.
+
+Images that should not be processed by Astro, like favicons, should be stored in `public`.
+
+:::note[Use `src/images` for all content images]
+Put all images used in Tech Docs content in `src/images`.
+:::
+
+### The `public` directory
+
+Files placed in `public` are not processed by Astro. They are copied directly to the output and made available from the root of the site, so `public/favicon.svg` becomes available at `docs.archivesspace.org/favicon.svg`, while `public/example/slides.pdf` becomes available at `docs.archivesspace.org/example/slides.pdf`.
+
+## Mermaid diagrams
+
+Tech Docs supports Mermaid diagrams in both docs and blog content (for authoring syntax, see [Authoring content](/about/authoring#diagrams)). Mermaid is a text-to-diagram tool: authors write diagram definitions in a code fence, and Mermaid turns that text into SVG diagrams in the browser. 
This differs from regular fenced code blocks, which Starlight renders with [Expressive Code](https://expressive-code.com/) as static syntax-highlighted code snippets.
+
+### Implementation
+
+1. Runtime logic lives in `src/lib/mermaid.ts`.
+2. The runtime is loaded by the Starlight page frame override in `src/components/overrides/PageFrame.astro`.
+3. Mermaid fences are post-processed at runtime and rendered as SVG diagrams.
+
+### Theme behavior
+
+- The Mermaid theme is derived from the site theme (`data-theme` on `<html>`):
+  - dark mode => Mermaid `dark`
+  - non-dark modes => Mermaid `default`
+- A `MutationObserver` in `src/lib/mermaid.ts` watches for `data-theme` changes and re-renders existing Mermaid diagrams so colors update after theme toggles.
+- Mermaid text color is explicitly set in `initializeMermaidRuntime()` for improved accessibility over its default styles:
+  - dark mode text: `#fff`
+  - light mode text: `#000`
+
+### Maintenance notes
+
+- If Starlight/Expressive Code markup changes in a future upgrade, update the Mermaid selectors/parsing in `src/lib/mermaid.ts` (especially `pre[data-language="mermaid"]` and `.ec-line .code`).
+- If layout-level script loading changes, keep `src/components/overrides/PageFrame.astro` loading `src/lib/mermaid.ts` on pages where Markdown content appears.
+- Keep Cypress coverage in `cypress/e2e/mermaid.cy.js` updated when Mermaid rendering behavior or markup changes.
+
+## Update npm dependencies
+
+Run the following commands locally to update the npm dependencies, then push the changes upstream.
+
+```sh
+# List outdated dependencies
+npm outdated
+
+# Update dependencies
+npm update
+```
+
+## Import aliases
+
+Astro supports [import aliases](https://docs.astro.build/en/guides/imports/#aliases), which provide shortcuts for writing long relative import paths. 
+ +```astro title="src/components/overrides/Example.astro" del="../../images" ins="@images" +--- +import relativeA from '../../images/A_logo.svg' // no alias +import aliasA from '@images/A_logo.svg' // alias +--- +``` + +## Sitemap + +Starlight has built-in [sitemap support](https://starlight.astro.build/guides/customization/#enable-sitemap) which is enabled via the top-level `site` key in `astro.config.mjs`. This key generates `/sitemap-index.xml` and `/sitemap-0.xml` when Tech Docs is [built](#building-the-site), and adds the sitemap link to the `<head>` of every page. `public/robots.txt` also points to the sitemap. + +## Testing + +### End-to-end + +Tech Docs uses [Cypress](https://www.cypress.io/) for end-to-end testing customizations made to the underlying Starlight framework and other project needs. End-to-end tests are located in `cypress/e2e`. + +Run the Cypress tests locally by first building and serving the site: + +```sh +# Build the site +npm run build + +# Serve the build output +npm run preview +``` + +Then **in a different terminal** initiate the tests: + +```sh +# Run the tests +npm test +``` + +### Code style + +Nearly all files in the Tech Docs code base get formatted by [Prettier](https://prettier.io/) to ensure consistent readability and syntax. Run Prettier locally to find format errors and automatically fix them when possible: + +```sh +# Check formatting of .md, .css, .astro, .js, .yml, etc. files +npm run prettier:check + +# Fix any errors that can be overwritten automatically +npm run prettier:fix +``` + +All CSS in .css and .astro files are linted by [Stylelint](https://stylelint.io/) to help avoid errors and enforce conventions. 
Run Stylelint locally to find lint errors and automatically fix them when possible: + +```sh +# Check all CSS +npm run stylelint:check + +# Fix any errors that can be overwritten automatically +npm run stylelint:fix +``` + +### CI/CD + +Before new changes are accepted into the code base, the [end-to-end](#end-to-end) and [code style](#code-style) tests need to pass. Tech Docs uses [GitHub Actions](https://docs.github.com/en/actions) for its continuous integration and continuous delivery (CI/CD) platform, which automates the testing and deployment processes. The tests are defined in yaml files found in `.github/workflows/` and are run automatically when new changes are proposed. diff --git a/src/content/docs/es/administration/backup.md b/src/content/docs/es/administration/backup.md new file mode 100644 index 0000000..688cf61 --- /dev/null +++ b/src/content/docs/es/administration/backup.md @@ -0,0 +1,160 @@ +--- +title: Backup and recovery +description: Steps, commands, and advice for setting up your ArchivesSpace MySQL database and Solr index. Backups will ensure recovery in case of error or failure. +--- + +## Using the docker configuration package + +### Database backups + +The [Docker configuration package](/administration/docker) includes a mechanism that performs periodic backups of your MySQL database, +using: [databacker/mysql-backup](https://github.com/databacker/mysql-backup). It is by default configured to perform +a dump every two hours. See [configuration](https://github.com/databacker/mysql-backup/blob/master/docs/configuration.md) for more options. + +The automatically created backups are located in the [`backups` directory](/administration/docker/) of the docker configuration package. 
#### When using Docker
+
+You can explicitly create a dump of your dockerized database while the Docker containers are running, using this command in your host system shell:
+
+```shell
+docker exec mysql mysqldump -u root -p123456 archivesspace | gzip > /tmp/db.$(date +%F.%H%M%S).sql.gz
+```
+
+#### When using Docker Desktop
+
+You can explicitly create a dump of your dockerized database while the containers are running. Commands on the "Exec" tab already run inside the mysql container itself, so call `mysqldump` directly (no `docker exec` needed):
+
+```shell
+mysqldump -u root -p123456 archivesspace | gzip > /tmp/db.$(date +%F.%H%M%S).sql.gz
+```
+
+You can then export the created database dump from the `/tmp` directory of your mysql container using the "Files" tab.
+
+## Managing your own backups
+
+Performing regular backups of your MySQL database is critical. ArchivesSpace stores all of your records data in the database, so as long as you have backups of your database you can always recover from errors and failures.
+
+If you are running MySQL, the `mysqldump` utility can dump the database schema and data to a file. It's a good idea to run this with the `--single-transaction` option to avoid locking your database tables while your backups run. It is also essential to use the `--routines` flag, which will include functions and stored procedures in the backup. The `mysqldump` utility is widely used, and there are many tutorials available. As an example, something like this in your `crontab` would back up your database twice daily:
+
+```shell
+# Dump archivesspace database 6am and 6pm
+30 06,18 * * * mysqldump --single-transaction --routines -u as -pas123 archivesspace | gzip > ~/backups/db.$(date +%F.%H%M%S).sql.gz
+```
+
+You should store backups in a safe location.
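
It is also worth routinely checking that stored dumps are intact and actually contain SQL. A self-contained sketch of such a check (the paths and file names are illustrative):

```shell
# Illustrative check: simulate a compressed dump, then verify it
mkdir -p /tmp/aspace-backup-check
echo "CREATE TABLE example (id INT);" | gzip > /tmp/aspace-backup-check/db.example.sql.gz

# gzip -t verifies archive integrity without extracting
gzip -t /tmp/aspace-backup-check/db.example.sql.gz && echo "backup intact"

# Peek at the first statement to confirm the dump looks like SQL
gunzip -c /tmp/aspace-backup-check/db.example.sql.gz | head -n 1
```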
+
+If you are running with the demo database (NEVER run the demo database in production), you can create periodic database snapshots using the following configuration settings:
+
+```ruby
+# In this example, we create a snapshot at 4am each day and keep
+# 7 days' worth of backups
+#
+# Database snapshots are written to 'data/demo_db_backups' by
+# default.
+AppConfig[:demo_db_backup_schedule] = "0 4 * * *"
+AppConfig[:demo_db_backup_number_to_keep] = 7
+```
+
+Solr indexes can always be [recreated](/administration/indexes/) from the contents of the database. For large sites, where recreating the indexes would take too long, it is possible to [back up and restore Solr indexes](https://solr.apache.org/guide/solr/latest/deployment-guide/backup-restore.html). In that case, you also need to back up and restore the files used by the indexers to mark which part of the data is already indexed:
+
+```
+docker cp archivesspace:/archivesspace/data/indexer_state /tmp/indexer_state
+docker cp archivesspace:/archivesspace/data/indexer_pui_state /tmp/indexer_pui_state
+```
+
+## Creating backups of your database using the provided script
+
+ArchivesSpace provides simple scripts for Windows and Unix-like systems for backing up the database to a `.zip` file.
+
+### When using the embedded demo database
+
+Note: _NEVER use the demo database in production._ You can run:
+
+```shell
+scripts/backup.sh --output /path/to/backup-yyyymmdd.zip
+```
+
+and the script will generate a file containing a snapshot of the demo database.
+
+### When using MySQL
+
+If you are running against MySQL and have `mysqldump` installed, you can provide the `--mysqldump` option. This will read the database settings from your configuration file and add a dump of your MySQL database to the resulting `.zip` file.
+
+```shell
+scripts/backup.sh --mysqldump --output ~/backups/backup-yyyymmdd.zip
+```
+
+## Recovering from backup
+
+When recovering an ArchivesSpace installation from backup, you will need to restore your database (either the demo database or MySQL).
+
+After restoring your database, it is recommended to [recreate your Solr indexes](/administration/indexes/).
+
+### Recovering your database
+
+#### When managing your own MySQL
+
+If you are using MySQL, recovering your database just requires loading your `mysqldump` backup into an empty database. If you are using the `scripts/backup.sh` script (described above), this dump file is named `mysqldump.sql` in your backup `.zip` file.
+
+To load a MySQL dump file, follow the directions in _Set up your MySQL database_ to create an empty database with the appropriate permissions. Then, populate the database from your backup file using the MySQL client:
+
+```shell
+mysql -uas -p archivesspace < mysqldump.sql
+```
+
+Here `as` is the user name, `archivesspace` is the database name, and `mysqldump.sql` is the mysqldump filename. You will be prompted for the password of the user.
+
+#### When using the demo database
+
+If you are using the demo database, your backup `.zip` file will contain a directory called `demo_db_backups`. Each subdirectory of `demo_db_backups` contains a backup of the demo database. To restore from a backup, copy its `archivesspace_demo_db` directory back to your ArchivesSpace data directory.
For example:
+
+```shell
+cp -a /unpacked/zip/demo_db_backups/demo_db_backup_1373323208_25926/archivesspace_demo_db \
+/path/to/archivesspace/data/
+```
+
+#### When running on Docker
+
+If you are using the Docker configuration package to run ArchivesSpace, you can restore a database dump onto your `archivesspace` MySQL database with the following command in your host system shell (the `-i` flag attaches your shell's stdin to the container so the redirected file is read):
+
+```shell
+docker exec -i mysql mysql -uas -pas123 archivesspace < /tmp/db.2025-02-26.164907.sql
+```
+
+##### When using Docker Desktop
+
+On Docker Desktop, you can import your SQL dump file into the `/tmp/` directory using the "Files" tab of your mysql container. Afterwards, on the "Exec" tab run the command:
+
+```shell
+gunzip -c /tmp/db.2026-02-17.155254.sql.gz | mysql -u as -pas123 archivesspace
+```
diff --git a/src/content/docs/es/administration/docker.md b/src/content/docs/es/administration/docker.md
new file mode 100644
index 0000000..8488c78
--- /dev/null
+++ b/src/content/docs/es/administration/docker.md
@@ -0,0 +1,226 @@
+---
+title: Running with Docker
+description: Instructions on setting up, running, and managing an ArchivesSpace installation using Docker.
+---
+
+## Docker images
+
+Starting with v4.0.0, ArchivesSpace officially supports using [Docker](https://www.docker.com/) as the easiest way to get up and running. Docker eases installing, upgrading, starting, and stopping ArchivesSpace. It also makes it easy to set up ArchivesSpace as a system service that starts automatically on every reboot.
+
+If you prefer not to use Docker, another (more involved) way to get ArchivesSpace up and running is installing the latest [distribution `.zip` file](/getting_started/zip_distribution).
+
+ArchivesSpace Docker images are available on [Docker Hub](https://hub.docker.com/u/archivesspace).
+
+- main application images are built from [this Dockerfile](https://github.com/archivesspace/archivesspace/blob/master/Dockerfile)
+- Solr images are built from [this Dockerfile](https://github.com/archivesspace/archivesspace/blob/master/solr/Dockerfile)
+
+## Installing
+
+### System requirements
+
+ArchivesSpace on Docker has been tested on Ubuntu Linux, Mac OS X, and Windows. At least 1024 MB of RAM is required; we recommend using at least 2 GB for optimal performance.
+
+### Software dependencies
+
+When using Docker, the only software dependency is [Docker](https://www.docker.com/) itself. Follow the [instructions](https://docs.docker.com/get-started/get-docker/) to install the Docker engine. Optionally, installing [Docker Desktop](https://www.docker.com/products/docker-desktop/) provides a graphical way to manage, start, and stop your Docker containers, review the container logs, and more.
+
+### Downloading the configuration package
+
+To run ArchivesSpace with Docker, first download the ArchivesSpace Docker configuration package of the latest release from [GitHub](https://github.com/archivesspace/archivesspace/releases) (scroll down to the "Assets" section of the latest release page and look for the zip file named `archivesspace-docker-${VERSION}.zip`).
+
+The downloaded configuration package contains a simple yet configurable and production-ready Docker-based setup intended to run on a single computer.
+
+### Contents of the configuration package
+
+Unzipping the downloaded file will create an `archivesspace` directory with the following contents:
+
+```
+.
+├── backups
+├── config
+│   └── config.rb
+├── locales
+├── plugins
+├── proxy-config
+│   └── default.conf
+├── sql
+├── docker-compose.yml
+├── stylesheets
+└── .env
+```
+
+- The `backups` directory is created the first time you start the application and will contain the automatically performed backups of the database. See the [Automated Backups section](#automated-database-backups). 
+- The `config/config.rb` file contains the [main configuration](/customization/configuration/) of ArchivesSpace.
+- The `locales` directory allows [customization of the UI text](/customization/locales/).
+- The `plugins` directory is there to accommodate additional ArchivesSpace [plugins](/customization/plugins/). By default, it contains the [`local`](/customization/plugins/#adding-your-own-branding) and [`lcnaf`](https://github.com/archivesspace-plugins/lcnaf) plugins.
+- `proxy-config/default.conf` contains the configuration of the bundled `nginx`; see also [proxy configuration](#proxy-configuration).
+- In the `sql` directory you can put a `.sql` database dump file to initialize the new database; see the [next section](#migrating-from-the-zip-distribution-to-docker).
+- The `stylesheets` directory contains the files that are used to create PDFs and other files.
+- `docker-compose.yml` contains all the information required by Docker to build and run ArchivesSpace.
+- `.env` contains configuration of the Docker containers including:
+  - Credentials used by archivesspace to access its MySQL database. It is recommended to change the default root and user passwords to something safer.
+  - The database connection URI, which should also be [updated accordingly](/customization/configuration/#database-config) after the database user password is updated in the step above.
+
+## Migrating from the zip distribution to Docker
+
+If you are currently running ArchivesSpace using the zip file distribution, you can start using Docker instead.
+
+### Create a backup of your ArchivesSpace instance database
+
+Use `mysqldump` to create a dump of your MySQL database:
+
+```shell
+mysqldump -uroot -p123456 -h 127.0.0.1 archivesspace > /tmp/db.$(date +%F.%H%M%S).sql
+```
+
+Follow the steps under the [Backup and recovery](/administration/backup/) section if you need more instructions on how to create backups of your MySQL database. 
+
+### Initialize and migrate the database on Docker
+
+Copy the `.sql` database dump file created above into the `sql` directory of your unzipped Docker configuration package. Make sure the filename includes the `.sql` extension. The file should be in plain text format (not zipped). Docker will pick it up when it starts for the first time and restore the dump to your new database.
+
+If you created the dump on an earlier ArchivesSpace version, the system will apply any pending database migrations to upgrade your database to the ArchivesSpace version you are currently running on Docker.
+
+After the initial run you will want to remove that `.sql` file from the `sql` directory of your unzipped Docker configuration package.
+
+The Docker configuration package already includes a configurable database backup mechanism for MySQL. Read more about it in the [backup and recovery section](/administration/backup/#using-the-docker-configuration-package).
+
+## Running
+
+### Resource limits
+
+We recommend allocating at least 2 GB per container for optimal performance. If the host instance is devoted to running ArchivesSpace, it is advisable to configure no memory limit for Docker containers.
+
+When using Docker Desktop, a default memory limit is set to 50% of your host's memory. To increase the RAM and other resource limits when using Docker Desktop, see [the documentation](https://docs.docker.com/desktop/settings-and-maintenance/settings/#resources).
+
+When using Docker without Docker Desktop, no memory limit is set by default. See the [Docker documentation](https://docs.docker.com/engine/containers/resource_constraints/) if you need to set limits on the resources used by ArchivesSpace containers.
+
+### Note on migrating from the zip distribution
+
+If migrating from the zip distribution to Docker, you most likely have local MySQL and Solr instances running. Starting ArchivesSpace with Docker will start Docker-based MySQL and Solr instances. 
In order to avoid port binding conflicts, make sure that you stop your local MySQL and Solr instances before proceeding.
+
+### Start
+
+Open a terminal, change to the `archivesspace` directory that contains the `docker-compose.yml` file, and run:
+
+```shell
+docker compose up --detach
+```
+
+The first time you start ArchivesSpace with Docker, the container images will be downloaded, and configuration steps such as database setup and Solr index initialization will be performed automatically.
+The whole process can take ten minutes or more, depending on the power of your machine and your internet connection speed. **Note:** if you are migrating from the zip distribution to Docker and have already copied a dump of your database into the `sql` directory, initializing the database and indexing it in Solr can take a long time depending on the size of your data.
+
+Starting with the `--detach` option allows closing the terminal without stopping ArchivesSpace. Viewing the logs of running ArchivesSpace containers is possible in [Docker Desktop](https://www.docker.com/products/docker-desktop/) or in a terminal with:
+
+```shell
+docker compose logs --follow
+```
+
+Watch the logs for the welcome message:
+
+```
+2024-12-04 18:42:17 archivesspace | ************************************************************
+2024-12-04 18:42:17 archivesspace | Welcome to ArchivesSpace!
+2024-12-04 18:42:17 archivesspace | You can now point your browser to http://localhost:8080
+2024-12-04 18:42:17 archivesspace | ************************************************************
+```
+
+Using the default proxy configuration, the Public User Interface becomes available at http://localhost/ and the Staff User Interface at http://localhost/staff/ (default login: admin / admin).
+
+You can see the status of your running containers with:
+
+```
+docker ps
+```
+
+This will give a listing like the following:
+
+```
+CONTAINER ID   IMAGE                               COMMAND                  CREATED        STATUS                    PORTS                                                  NAMES
+6cd7114c1796   nginx:1.21                          "/docker-entrypoint.…"   26 hours ago   Up 29 minutes             0.0.0.0:80->80/tcp, :::80->80/tcp                      proxy
+9ed453c46a9f   archivesspace/archivesspace:4.0.0   "/archivesspace/star…"   26 hours ago   Up 29 minutes (healthy)   8080-8081/tcp, 8089-8090/tcp, 8092/tcp                 archivesspace
+ec71dd3030b7   databack/mysql-backup:latest        "/entrypoint dump"       26 hours ago   Up 29 minutes                                                                    db-backup
+8b74aa374ec8   archivesspace/solr:4.0.0            "docker-entrypoint.s…"   26 hours ago   Up 29 minutes             0.0.0.0:8983->8983/tcp, :::8983->8983/tcp              solr
+d2cf634744fe   mysql:8                             "docker-entrypoint.s…"   26 hours ago   Up 29 minutes             0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp   mysql
+```
+
+If you also have [Docker Desktop](https://www.docker.com/products/docker-desktop/) installed, you can use it to start, stop, and manage the ArchivesSpace containers after they have been created for the first time. Docker Desktop has a built-in terminal window that can be used to run Docker commands.
+
+### Stop
+
+The following commands need to be run from the `archivesspace` directory that contains the `docker-compose.yml` file.
You can stop running containers (without deleting them) with the command:
+
+```shell
+docker compose stop
+```
+
+They can be started again with:
+
+```shell
+docker compose up --detach
+```
+
+### Start a shell within a container to run the provided scripts
+
+You can get a `bash` shell on the container running the ArchivesSpace application and run any of the scripts in the `scripts` directory with:
+
+```shell
+$ docker exec -it archivesspace bash
+archivesspace@9ed453c46a9f:/$ cd archivesspace/scripts/
+archivesspace@9ed453c46a9f:/archivesspace/scripts$ ls
+backup.bat backup.sh ead_export.bat ead_export.sh find-base.sh initialize-plugin.bat initialize-plugin.sh password-reset.bat password-reset.sh rb setup-database.bat setup-database.sh
+archivesspace@9ed453c46a9f:/archivesspace/scripts$ ./setup-database.sh
+NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.base/java.io=ALL-UNNAMED
+Loading ArchivesSpace configuration file from path: /archivesspace/config/config.rb
+Loading ArchivesSpace configuration file from path: /archivesspace/config/config.rb
+Loading ArchivesSpace configuration file from path: /archivesspace/config/config.rb
+Detected MySQL connector 8+
+Running migrations against jdbc:mysql://db:3306/archivesspace?useUnicode=true&characterEncoding=UTF-8&user=[REDACTED]&password=[REDACTED]&useSSL=false&allowPublicKeyRetrieval=true&serverTimezone=UTC
+All done.
+```
+
+### Copy files from and to your data directory
+
+The ArchivesSpace `data` directory is not exposed in the Docker configuration package (unlike `config` and `locales`, which are exposed and easily accessible). This is due to issues we have had on Windows when exposing
+the `data` directory instead of using a Docker volume for it.
+
+If you need to copy files from/to the `data` directory, or any other directory of the ArchivesSpace installation, you can use [`docker cp`](https://docs.docker.com/reference/cli/docker/container/cp/) commands, such as:
+
+```shell
+docker cp archivesspace:/archivesspace/data/indexer_state /tmp/indexer_state
+docker cp ~/Desktop/test.png archivesspace:/archivesspace/data
+```
+
+## Automated database backups
+
+The Docker configuration package includes a mechanism that will perform periodic backups of your MySQL database; see the [Backup and Recovery](/administration/backup/#backups-when-using-the-docker-configuration-package) section for more information.
+
+## Proxy Configuration
+
+The Docker configuration package includes an `nginx`-based proxy that by default binds to port 80 of the host machine (see the `NGINX_PORT` variable in the `.env` file). See `proxy-config/default.conf` and the [nginx docker page](https://hub.docker.com/_/nginx) for more configuration options.
+
+## Upgrading
+
+If you are already using the Docker configuration package and upgrading to a newer ArchivesSpace version, [download and extract](#downloading-the-configuration-package) the latest version of the Docker configuration package.
+
+### With Solr configuration / schema changes
+
+If the ArchivesSpace version you are upgrading to includes Solr configuration or schema changes (see the [release notes](https://github.com/archivesspace/archivesspace/releases)), then you need to recreate your Solr core and re-index.
Change to the `archivesspace` directory where you extracted the freshly downloaded Docker configuration package and run:
+
+```shell
+docker compose down solr app
+docker volume rm archivesspace_app-data archivesspace_solr-data
+docker compose pull
+docker compose up -d --build --force-recreate
+```
+
+### Without Solr configuration / schema changes
+
+If no Solr configuration or schema changes are included, change to the extracted `archivesspace` directory and run:
+
+```shell
+docker compose pull
+docker compose up -d --build --force-recreate
+```
diff --git a/src/content/docs/es/administration/getting_started.mdx b/src/content/docs/es/administration/getting_started.mdx
new file mode 100644
index 0000000..5572750
--- /dev/null
+++ b/src/content/docs/es/administration/getting_started.mdx
@@ -0,0 +1,143 @@
+---
+title: Getting started
+description: Detailed hardware and software requirements for running ArchivesSpace, including instructions on setting up and running an ArchivesSpace instance using the latest distribution .zip file.
+---
+
+import LatestReleaseBlurb from '@components/LatestReleaseBlurb.astro'
+
+## The latest release
+
+<LatestReleaseBlurb />
+
+## Two installation methods
+
+There are two different ways to install ArchivesSpace:
+
+- Using Docker
+- Using the `.zip` file distribution
+
+### Using Docker
+
+Starting with ArchivesSpace v4.0.0, the easiest and recommended way to get up and running is using Docker. This method eases installing, upgrading, starting, and stopping ArchivesSpace. It also makes it easy to set up ArchivesSpace as a system service that starts automatically on every reboot. See the [Running with Docker](/administration/docker/) page for instructions on how to install ArchivesSpace using Docker.
+
+### Using the `.zip` file distribution
+
+The older and more involved way is to install from the latest distribution `.zip` file as described below.
+
+#### System requirements
+
+##### Operating system
+
+ArchivesSpace is tested on Ubuntu Linux, Mac OS X, and Windows.
+
+##### Memory
+
+At least 1024 MB of RAM allocated to the application is required. We recommend using at least 2 GB for optimal performance.
+
+#### Software requirements
+
+When using the zip distribution, a Java runtime environment and a Solr instance are required. See [using Docker](/administration/docker/) to avoid these dependencies.
+
+##### Java Runtime Environment
+
+We recommend using [OpenJDK](https://openjdk.org/projects/jdk/). The following table lists the supported Java versions for each version of ArchivesSpace:
+
+| ArchivesSpace version | OpenJDK version |
+| --------------------- | --------------- |
+| ≤ v3.5.1              | 8 or 11         |
+| v4.0.0 up to v4.1.1   | 11 or 17        |
+| ≥ v4.2.0              | 17 or 21        |
+
+While the JRuby version used in ArchivesSpace v4.2.0 is still compatible with Java 11, we highly recommend using Java 17 or 21, as those are the Java versions ArchivesSpace v4.2.0 has been tested with. You can still use Java 11 with v4.2.0, but the ArchivesSpace Program Team can only provide support for environments using Java versions we have tested ArchivesSpace with (17 or 21).
+
+Note that in the next major release we expect to drop support for Java 17 and only support Java 21 and 25.
+
+##### Solr
+
+Up to ArchivesSpace v3.1.1, the zip file distribution includes an embedded Solr v4 instance, which is deprecated and no longer supported. Use the Docker images provided in the [ArchivesSpace Docker repository](https://hub.docker.com/orgs/archivesspace/repositories), and see also [using Docker](/administration/docker/), to avoid managing an external Solr instance.
+
+ArchivesSpace v3.2.0 or above requires an external Solr instance when running using the zip distribution.
The table below summarizes the supported Solr versions for each ArchivesSpace version:
+
+| ArchivesSpace version | External Solr version     |
+| --------------------- | ------------------------- |
+| ≤ v3.1.1              | no external Solr required |
+| v3.2.0 up to v3.5.1   | 8 (8.11)                  |
+| v4.0.0 up to v4.1.1   | 9 (9.4.1)                 |
+| ≥ v4.2.0              | 9 (9.9.0)                 |
+
+Each ArchivesSpace version is tested for compatibility with the corresponding Solr version listed in the table above. Using the corresponding version of Solr is recommended, as that version is used during development and when running the ArchivesSpace automated tests.
+
+If you need to use ArchivesSpace with an older version of Solr, check the [release notes](https://github.com/archivesspace/archivesspace/releases) for any potential version compatibility issues.
+
+**Note: the ArchivesSpace Program Team can only provide support for Solr deployments
+using the "officially" supported version with the standard configuration provided by
+the application. Everything else will be treated as "best effort" community-led support.**
+
+See [Running with external Solr](/provisioning/solr) for more information on installing and upgrading Solr.
+
+##### Database
+
+While ArchivesSpace does include an embedded database, MySQL is required for production use.
+
+(While not officially supported by ArchivesSpace, some community members use MariaDB, so there is some community support for version 10.4.10 only.)
+
+**The embedded database is for testing purposes only. You should use MySQL or MariaDB for any data intended for production, including data in a test instance that you intend to move over to a production instance.**
+
+All ArchivesSpace versions can run on MySQL version 5.x or 8.x.
+
+#### Install and run
+
+Download the distribution `.zip` for your version from [ArchivesSpace releases on GitHub](https://github.com/archivesspace/archivesspace/releases).
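The release assets follow a predictable URL pattern, as a sketch (the version number below is illustrative; pick the actual release you need from the releases page):

```shell
# illustrative version; substitute the release you want
VERSION="v4.2.0"
URL="https://github.com/archivesspace/archivesspace/releases/download/${VERSION}/archivesspace-${VERSION}.zip"
echo "$URL"

# download and extract (commented out here; run once the URL is confirmed):
# curl -LJO "$URL"
# unzip "archivesspace-${VERSION}.zip"
```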
+ +Confirm a supported Java version is active on your PATH: + +```sh +java -version +``` + +Compare the output with [Java Runtime Environment](#java-runtime-environment). If needed, install a supported OpenJDK or point your environment at one (avoid using an unsupported newer Java as the default). + +Extract the `.zip`; it creates a directory named `archivesspace`. Before starting ArchivesSpace, finish provisioning: + +- [MySQL](/provisioning/mysql) +- JDBC driver: [Download MySQL Connector](/provisioning/mysql/#download-mysql-connector) +- External [Solr](/provisioning/solr) when your version requires it (ArchivesSpace v3.2.0 and later on the zip distribution; see [Solr](#solr)) + +**Do not proceed until MySQL and Solr (when required) are running.** + +Start ArchivesSpace from that directory. On Linux and macOS: + +```shell +cd /path/to/archivesspace +./archivesspace.sh +``` + +On Windows: + +```shell +cd \path\to\archivesspace +archivesspace.bat +``` + +This runs ArchivesSpace in the foreground (it stops when you close the terminal). By default, logs are written to `logs/archivesspace.out`. + +**Note:** On Windows, errors such as `unable to resolve type 'size_t'` or `no such file to load -- bundler` often mean the path to the `archivesspace` folder contains spaces. Use a path without spaces. + +##### Verify and sign in + +The first startup can take about a minute. Then confirm the services in a browser: + +- http://localhost:8089/ — backend +- http://localhost:8080/ — staff interface +- http://localhost:8081/ — public interface +- http://localhost:8082/ — OAI-PMH server +- http://localhost:8090/ — Solr admin console + +In the staff interface, sign in with the default administrator account: + +- Username: `admin` +- Password: `admin` + +Create a repository via **System** → **Manage repositories** (top right). From **System** you can manage users and other administration tasks. 
**Change the default `admin` password before production use.** diff --git a/src/content/docs/es/administration/index.md b/src/content/docs/es/administration/index.md new file mode 100644 index 0000000..91ff590 --- /dev/null +++ b/src/content/docs/es/administration/index.md @@ -0,0 +1,13 @@ +--- +title: Administration basics +description: Index of the administration pages for the tech-docs website. +--- + +- [Getting started](./getting_started) +- [Running ArchivesSpace as a Unix daemon](./unix_daemon) +- [Running ArchivesSpace as a Windows service](./windows) +- [Backup and recovery](./backup) +- [Re-creating indexes](./indexes) +- [Resetting passwords](./passwords) +- [Upgrading](./upgrading) +- [Log rotation](./logrotate) diff --git a/src/content/docs/es/administration/indexes.md b/src/content/docs/es/administration/indexes.md new file mode 100644 index 0000000..aef049f --- /dev/null +++ b/src/content/docs/es/administration/indexes.md @@ -0,0 +1,86 @@ +--- +title: Recreating indexes +description: Steps for performing soft reindexes and full reindexes of Solr, including internal and external Solr. +--- + +There are two strategies for reindexing ArchivesSpace: + +- soft reindex +- full reindex + +## Soft reindex + +A soft reindex updates the existing documents in Solr without directly +touching the actual index documents on the filesystem. This can be done +while the system is running and is suitable for most use cases. + +There are two common ways to perform a soft reindex: + +1. Delete indexer state files + +ArchivesSpace keeps track of what has been indexed by using the files +under `data/indexer_state` and `data/indexer_pui_state` (for the PUI). + +If these files are missing, the indexer assumes that nothing has been +indexed and reindexes everything. To force ArchivesSpace to reindex all +records, just delete the files in `/path/to/archivesspace/data/indexer_state` +and `/path/to/archivesspace/data/indexer_pui_state`. 
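The effect of deleting the state files can be sketched with a stand-in directory (paths and file names below are illustrative; on a real instance, the directories are the `data/indexer_state` and `data/indexer_pui_state` paths above):

```shell
# stand-in for /path/to/archivesspace/data/indexer_state (illustrative)
mkdir -p /tmp/aspace-demo/indexer_state
touch /tmp/aspace-demo/indexer_state/2_accession.dat

# with no state files present, the indexer assumes nothing has been indexed
rm -f /tmp/aspace-demo/indexer_state/*.dat
ls /tmp/aspace-demo/indexer_state | wc -l   # no state files remain
```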
+
+You can also do this selectively by record type. For example, to reindex
+accessions in repository 2, delete the file called `2_accession.dat`.
+
+2. Bump `system_mtime` values in the database
+
+If you update a record's `system_mtime`, it becomes eligible for reindexing.
+
+```sql
+-- reindex all resources
+UPDATE resource SET system_mtime = NOW();
+-- reindex resource 1
+UPDATE resource SET system_mtime = NOW() WHERE id = 1;
+```
+
+## Full reindex
+
+A full reindex is a complete rebuild of the index from the database. This
+may be required if you are having indexer issues, in the case of index
+corruption, or if called for by an upgrade owing to changes in ArchivesSpace's
+Solr configuration.
+
+To perform a full reindex:
+
+### ArchivesSpace <= 3.1.0 (embedded Solr)
+
+- Shut down ArchivesSpace
+- Delete these directories:
+  - `rm -rf /path/to/archivesspace/data/indexer_state/`
+  - `rm -rf /path/to/archivesspace/data/indexer_pui_state/`
+  - `rm -rf /path/to/archivesspace/data/solr_index/`
+- Restart ArchivesSpace
+
+### ArchivesSpace > 3.1.0 (external Solr)
+
+For external Solr there is a plugin that can perform all of the re-indexing steps: [aspace-reindexer](https://github.com/lyrasis/aspace-reindexer)
+
+Manual steps:
+
+- Shut down ArchivesSpace
+- Delete these directories:
+  - `rm -rf /path/to/archivesspace/data/indexer_state/`
+  - `rm -rf /path/to/archivesspace/data/indexer_pui_state/`
+- Perform a delete-all Solr query:
+  - `curl -X POST -H 'Content-Type: application/json' --data-binary '{"delete":{"query":"*:*" }}' http://${solrUrl}:${solrPort}/solr/archivesspace/update?commit=true`
+  - Windows PowerShell:
+    ```
+    Invoke-RestMethod -Uri "http://localhost:8983/solr/archivesspace/update?commit=true" `
+      -Method Post `
+      -ContentType "application/json" `
+      -Body '{"delete":{"query":"*:*"}}'
+    ```
+- Restart ArchivesSpace
+
+---
+
+You can watch the [Tips for indexing ArchivesSpace](https://www.youtube.com/watch?v=yFJ6yAaPa3A) YouTube video to see these steps
performed.
+
+---
diff --git a/src/content/docs/es/administration/logrotate.md b/src/content/docs/es/administration/logrotate.md
new file mode 100644
index 0000000..d96ce90
--- /dev/null
+++ b/src/content/docs/es/administration/logrotate.md
@@ -0,0 +1,28 @@
+---
+title: Log rotation
+description: Details an example of how to set up log rotation, which helps keep the ArchivesSpace log file from growing excessively.
+---
+
+In order to prevent your ArchivesSpace log file from growing excessively, you can set up log rotation. How to set up log rotation is specific to your institution, but here is an example `logrotate` config file with an explanation of what it does.
+
+An example file in `/etc/logrotate.d/`:
+
+```
+ /<install location>/archivesspace/logs/archivesspace.out {
+   daily
+   rotate 7
+   compress
+   notifempty
+   missingok
+   copytruncate
+ }
+```
+
+This example configuration file:
+
+- rotates the logs daily
+- keeps 7 days' worth of logs
+- compresses the logs so they take up less space
+- ignores empty logs
+- does not report errors if the log file is missing
+- creates a copy of the original log file for rotation before truncating the contents of the original file
diff --git a/src/content/docs/es/administration/passwords.md b/src/content/docs/es/administration/passwords.md
new file mode 100644
index 0000000..088336b
--- /dev/null
+++ b/src/content/docs/es/administration/passwords.md
@@ -0,0 +1,16 @@
+---
+title: Resetting passwords
+description: How to run a script that resets a user's password within ArchivesSpace.
+---
+
+Under the `scripts` directory you will find a script that lets you
+reset a user's password. You can invoke it as:
+
+```
+scripts/password-reset.sh theusername newpassword # or password-reset.bat under Windows
+```
+
+If you are running against MySQL, you can use this command to set a
+password while the system is running. If you are running against the
+demo database, you will need to shut down ArchivesSpace before running
+this script.
diff --git a/src/content/docs/es/administration/unix_daemon.md b/src/content/docs/es/administration/unix_daemon.md
new file mode 100644
index 0000000..ba8d9d3
--- /dev/null
+++ b/src/content/docs/es/administration/unix_daemon.md
@@ -0,0 +1,60 @@
+---
+title: Running as a Unix daemon
+description: Steps for running ArchivesSpace in the background as a daemon using the startup script, and additional info on configuring startup/init settings.
+---
+
+The `archivesspace.sh` startup script doubles as an init script. If
+you run:
+
+```
+archivesspace.sh start
+```
+
+ArchivesSpace will run in the background as a daemon (logging to
+`logs/archivesspace.out` by default, as before). You can shut it down with:
+
+```
+archivesspace.sh stop
+```
+
+You can even install it as a system-wide init script by creating a
+symbolic link:
+
+```
+cd /etc/init.d
+ln -s /path/to/your/archivesspace/archivesspace.sh archivesspace
+```
+
+Note: By default ArchivesSpace will overwrite the log file when restarted. You
+can change that by modifying `archivesspace.sh` and changing the `$startup_cmd`
+to use the appending redirection operator (double greater-than signs):
+
+```
+$startup_cmd &>> \"$ARCHIVESSPACE_LOGS\" &
+```
+
+Then use the appropriate tool for your distribution to set up the
+run-level symbolic links (such as `chkconfig` for RedHat or
+`update-rc.d` for Debian-based distributions).
+
+Note that you may want to edit `archivesspace.sh` to set the account
+that the system runs under, JVM options, and so on.
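The note above about using double greater-than signs comes down to the difference between truncating (`>`) and appending (`>>`) redirection, which a quick experiment illustrates (the log path is an illustrative stand-in):

```shell
LOG=/tmp/redirect-demo.log
echo "first start"  >  "$LOG"   # > truncates the file before writing
echo "second start" >  "$LOG"   # so only this line survives
echo "third start"  >> "$LOG"   # >> appends to the existing contents
cat "$LOG"
```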
+
+For systems that use systemd, you may wish to use a systemd unit file for ArchivesSpace.
+
+Something similar to this should work:
+
+```
+[Unit]
+Description=ArchivesSpace Application
+After=syslog.target network.target
+
+[Service]
+Type=forking
+ExecStart=/path/to/your/archivesspace/archivesspace.sh start
+ExecStop=/path/to/your/archivesspace/archivesspace.sh stop
+PIDFile=/path/to/your/archivesspace/archivesspace.pid
+User=archivesspace
+Group=archivesspace
+
+[Install]
+WantedBy=multi-user.target
+```
diff --git a/src/content/docs/es/administration/upgrading.md b/src/content/docs/es/administration/upgrading.md
new file mode 100644
index 0000000..9c5376d
--- /dev/null
+++ b/src/content/docs/es/administration/upgrading.md
@@ -0,0 +1,183 @@
+---
+title: Upgrading when using the zip distribution
+description: Instructions on how to update ArchivesSpace.
+---
+
+If you have installed ArchivesSpace using the Docker configuration package, refer to [upgrading with Docker](/administration/docker/#upgrading). If you have installed ArchivesSpace using the zip distribution, read on! (In case you do not know what the difference is, see the [getting started page](/administration/getting_started/#two-installation-methods).)
+
+You can upgrade most versions of ArchivesSpace to a later version using these general instructions. Typically you do not need to progress through other versions of ArchivesSpace to get to a later one, unless there are special considerations for a specific version. Special considerations for these versions are noted here and in the release notes.
+
+- **[Special considerations when upgrading to v1.1.0](/administration/upgrading_1_1_0)**
+- **[Special considerations when upgrading to v1.1.1](/administration/upgrading_1_1_1)**
+- **[Special considerations when upgrading from v1.4.2 to 1.5.x (these considerations also apply when upgrading from 1.4.2 to any version through 2.0.1)](/administration/upgrading_1_5_0)**
+- **[Special considerations when upgrading to 2.1.0](/administration/upgrading_2_1_0)**
+- **[Changing to external Solr when upgrading to 3.2.0 or later versions](https://docs.archivesspace.org/provisioning/solr/)**
+
+## Create a backup of your ArchivesSpace instance
+
+You should make sure you have a working backup of your ArchivesSpace
+installation before attempting an upgrade. Follow the steps
+under the [Backup and recovery section](/administration/backup) to do this.
+
+## Unpack the new version
+
+It's a good idea to unpack a fresh copy of the version of
+ArchivesSpace you are upgrading to. This will ensure that you are
+running the latest versions of all files. In the examples below,
+replace the lowercase x with the version number you are updating to, for example,
+1.5.2 or 1.5.3.
+
+For example, on Mac OS X or Linux:
+
+```shell
+$ mkdir archivesspace-1.5.x
+$ cd archivesspace-1.5.x
+$ curl -LJO https://github.com/archivesspace/archivesspace/releases/download/v1.5.x/archivesspace-v1.5.x.zip
+$ unzip -x archivesspace-v1.5.x.zip
+```
+
+(The `curl` step is optional; it simply downloads the distribution from GitHub. You can also
+download the zip file in your browser and copy it to the directory.)
+
+On Windows, you can do the same by extracting ArchivesSpace into a new
+folder you create in Windows Explorer.
+
+## Shut down your ArchivesSpace instance
+
+To ensure you get a consistent copy, you will need to shut down your
+running ArchivesSpace instance now.
+
+## Copy your configuration and data files
+
+You will need to bring across the following files and directories from
+your original ArchivesSpace installation:
+
+- the `data` directory (see **Indexes note** below)
+- the `config` directory (see **Configuration note** below)
+- your `lib/mysql-connector*.jar` file (if using MySQL)
+- any plugins and local modifications you have installed in your `plugins` directory
+
+For example, on Mac OS X or Linux:
+
+```shell
+$ cd archivesspace-1.5.x/archivesspace
+$ cp -a /path/to/archivesspace-1.4.2/archivesspace/data/* data/
+$ cp -a /path/to/archivesspace-1.4.2/archivesspace/config/* config/
+$ cp -a /path/to/archivesspace-1.4.2/archivesspace/lib/mysql-connector* lib/
+$ cp -a /path/to/archivesspace-1.4.2/archivesspace/plugins/local plugins/
+$ cp -a /path/to/archivesspace-1.4.2/archivesspace/plugins/wonderful_plugin plugins/
+```
+
+Or on Windows:
+
+```
+$ cd archivesspace-1.5.x\archivesspace
+$ xcopy \path\to\archivesspace-1.4.2\archivesspace\data\* data /i /k /h /s /e /o /x /y
+$ xcopy \path\to\archivesspace-1.4.2\archivesspace\config\* config /i /k /h /s /e /o /x /y
+$ xcopy \path\to\archivesspace-1.4.2\archivesspace\lib\mysql-connector* lib /i /k /h /s /e /o /x /y
+$ xcopy \path\to\archivesspace-1.4.2\archivesspace\plugins\local plugins\local /i /k /h /s /e /o /x /y
+$ xcopy \path\to\archivesspace-1.4.2\archivesspace\plugins\wonderful_plugin plugins\wonderful_plugin /i /k /h /s /e /o /x /y
+```
+
+Note that you may want to preserve the log file (`logs/archivesspace.out`
+by default) from your previous installation--just in case you need to
+refer to it later.
+
+### Configuration note
+
+Sometimes a new release of ArchivesSpace will introduce new
+configuration settings that weren't present in previous releases.
+Before you replace the distribution `config/config.rb` with your
+original version, it's a good idea to review the distribution version
+to see if there are any new configuration settings of interest.
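One way to review the distribution's config against your own is a plain `diff`. A sketch with stand-in files (the file contents and the `some_new_setting` key are illustrative; in practice, compare your old `config/config.rb` with the new distribution's copy):

```shell
# stand-ins for your old config and the new distribution's config (illustrative)
printf 'AppConfig[:db_url] = "jdbc:mysql://..."\n' > /tmp/old_config.rb
printf 'AppConfig[:db_url] = "jdbc:mysql://..."\nAppConfig[:some_new_setting] = true\n' > /tmp/new_config.rb

# lines prefixed with ">" appear only in the new distribution's config
diff /tmp/old_config.rb /tmp/new_config.rb || true
```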
+
+Upgrade notes will generally draw attention to any configuration
+settings you need to set explicitly, but you never know when you'll
+discover a new, exciting feature! Documentation might also refer to
+uncommenting configuration options that won't be in your file if you
+keep your older version.
+
+### Indexes note
+
+Sometimes a new release of ArchivesSpace will require a FULL reindex,
+which means you do not want to copy over anything from your data directory
+to your new release. The data directory contains the indexes created by Solr.
+Check the release notes of the new version for any details about reindexing and
+the [recreating indexes section](/administration/indexes/) for instructions on recreating indexes.
+
+## Transfer your locales data
+
+If you've made modifications to your locales file (`en.yml`) with customized
+labels, titles, tooltips, etc., you'll need to transfer those to your new
+locale file.
+
+A good way to do this is to use a diff tool, like Notepad++, TextMate, or just
+the Linux `diff` command:
+
+```shell
+$ diff /path/to/archivesspace-1.4.2/locales/en.yml /path/to/archivesspace-1.5.x/archivesspace/locales/en.yml
+$ diff /path/to/archivesspace-1.4.2/locales/enums/en.yml /path/to/archivesspace-v1.5.x/archivesspace/locales/enums/en.yml
+```
+
+This will show you the differences in your current locales files, as well as the
+new additions in the new version's locales files. Simply copy the values you wish
+to keep from your old ArchivesSpace locales files to your new ArchivesSpace locales
+files.
+
+## Run the database migrations
+
+With everything copied, the final step is to run the database
+migrations. This will apply any schema changes and data migrations
+that need to happen as a part of the upgrade. To do this, use the
+`setup-database` script for your platform.
For example, on Mac OS X
+or Linux:
+
+```shell
+$ cd archivesspace-1.5.x/archivesspace
+$ scripts/setup-database.sh
+```
+
+Or on Windows:
+
+```shell
+$ cd archivesspace-1.5.x\archivesspace
+$ scripts\setup-database.bat
+```
+
+## Solr configuration updates
+
+If the release you are upgrading to includes updates to the Solr schema or other configuration files (see the release notes)
+and you're using external Solr (required beginning with version 3.2.0), you will need to update the Solr schema and configuration files
+accordingly, by [copying the Solr configuration files](/provisioning/solr/#copy-the-config-files) from the release package to your external Solr configuration.
+See also the [full instructions for using external Solr with ArchivesSpace](/provisioning/solr).
+
+## If you've deployed to Tomcat
+
+The steps to deploy to Tomcat are essentially the same as in the
+[archivesspace_tomcat](https://github.com/archivesspace-labs/archivesspace_tomcat) project.
+
+But prior to running your setup-tomcat script, you'll need to clean out
+any libraries from the previous ASpace version from your Tomcat classpath.
+
+ 1. Stop Tomcat
+ 2. Unpack your new version of ArchivesSpace
+ 3. Configure your MySQL database in `config.rb` (just like in the
+    install instructions)
+ 4. Make sure all your other local configuration settings are in your
+    `config.rb` file (check your Tomcat `conf/config.rb` file for your
+    current settings)
+ 5. Make sure your MySQL connector jar is in the `lib` directory
+ 6. Run your setup-database script to migrate your database
+ 7. Delete all ASpace-related jar libraries in your Tomcat's lib directory. These
+    will include the "gems" folder, as well as "common.jar" and some
+    [others](https://github.com/archivesspace/archivesspace/tree/master/common/lib).
+    This will make sure you're running the correct version of the dependent
+    libraries for your new ASpace version.
+    Just be sure not to delete any of the Apache Tomcat libraries.
+ 8. Run your setup-tomcat script (just like in the install instructions).
+    This will copy all the files over to Tomcat.
+ 9. Start Tomcat
+
+## That's it!
+
+You can now start your new ArchivesSpace version as normal.
diff --git a/src/content/docs/es/administration/upgrading_1_1_0.md b/src/content/docs/es/administration/upgrading_1_1_0.md
new file mode 100644
index 0000000..868b49f
--- /dev/null
+++ b/src/content/docs/es/administration/upgrading_1_1_0.md
@@ -0,0 +1,62 @@
+---
+title: Upgrading to 1.1.0
+description: Special considerations when upgrading from ArchivesSpace 1.0.9 or less to 1.1.0, including the option for an external Solr instance.
+---
+
+This page covers additional upgrade considerations specific to this release. Refer to the [upgrade documentation](/administration/upgrading) for the standard instructions that apply in all cases.
+
+## External Solr
+
+---
+
+In ArchivesSpace 1.0.9 the default ports configuration was:
+
+```ruby
+AppConfig[:backend_url] = "http://localhost:8089"
+AppConfig[:frontend_url] = "http://localhost:8080"
+AppConfig[:solr_url] = "http://localhost:8090"
+AppConfig[:public_url] = "http://localhost:8081"
+```
+
+With the introduction of the [optional external Solr instance](/provisioning/solr) functionality, this has been updated to:
+
+```ruby
+AppConfig[:backend_url] = "http://localhost:8089"
+AppConfig[:frontend_url] = "http://localhost:8080"
+AppConfig[:solr_url] = "http://localhost:8090"
+AppConfig[:indexer_url] = "http://localhost:8091" # NEW TO 1.1.0
+AppConfig[:public_url] = "http://localhost:8081"
+```
+
+In most cases the default value for `indexer_url` will blend in seamlessly without you needing to take any action. However, if you modified the original values in your `config.rb` file, you may need to update it.
Examples:
+
+**You use a different ports sequence**
+
+```ruby
+AppConfig[:indexer_url] = "http://localhost:9091"
+```
+
+**You run multiple ArchivesSpace instances on a single host**
+
+Under this deployment scenario you would have changed port numbers for some (or all) instances in each `config.rb` file, so set the `indexer_url` for each instance as described above.
+
+**You include hostnames**
+
+```ruby
+AppConfig[:indexer_url] = "http://yourhostname:8091"
+```
+
+## Clustering
+
+---
+
+In a clustered configuration you may need to edit `instance_[server hostname].rb` files:
+
+```ruby
+{
+  ...
+  :indexer_url => "http://[localhost|yourhostname]:8091",
+}
+```
+
+---
diff --git a/src/content/docs/es/administration/upgrading_1_1_1.md b/src/content/docs/es/administration/upgrading_1_1_1.md
new file mode 100644
index 0000000..1df7953
--- /dev/null
+++ b/src/content/docs/es/administration/upgrading_1_1_1.md
@@ -0,0 +1,58 @@
+---
+title: Upgrading to 1.1.1
+description: Instructions on how to resequence archival object and digital object components within the resource tree and details on a plugin to make PDFs available in the public interface.
+---
+
+Additional upgrade considerations specific to this release. Refer to the [upgrade documentation](/administration/upgrading) for the standard instructions that apply in all cases.
+
+## Resequencing of Archival Object & Digital Object Component trees
+
+---
+
+There have been some scenarios in which archival objects and digital object components lose
+some of the information used to order their hierarchy. This can result in issues when creating,
+editing, or moving items in the tree, since there are database constraints to ensure uniqueness
+of certain metadata elements.
+
+In order to ensure data integrity, there is now a method to resequence the trees. This will
+not reorder or edit the elements, but simply rebuild all the technical metadata used to establish
+the ordering.
+
+To run the resequencing process, edit the config/config.rb file to include this line:
+
+```ruby
+AppConfig[:resequence_on_startup] = true
+```
+
+and restart ArchivesSpace. This will trigger a rebuilding process after the application has
+started. It's advised to let this rebuild process run its course prior to editing records.
+The duration depends on the size of your database and can range from seconds ( for databases with
+few Archival and Digital Objects ) to hours ( for databases with hundreds of thousands of records ).
+Check your log file to see how the process is going. When it has finished, you should see the application
+return to normal operation, generally with only indexer updates being recorded in the log file.
+
+After ArchivesSpace has started, be sure to set `:resequence_on_startup` back to `false` in your
+config.rb file, since you do not need to run this process on every restart.
+
+## Export PDFs in the Public Interface
+
+---
+
+A common request has been to have a PDF version of the EAD exported in the public application.
+This has been a bit problematic, since EAD export places a rather large resource hit on the
+database, which is only increased by the added process of PDF creation. We are currently
+redesigning part of the ArchivesSpace backend to make PDF creation more user-friendly by
+establishing a queue system for exports.
+
+In the meantime, Mark Cooper at Lyrasis has made a [Public Metadata Formats plugin](https://github.com/archivesspace-deprecated/aspace-public-formats)
+that exposes certain metadata formats and PDFs in the public UI. This plugin has been included
+in this release, but you will need to configure which formats you would like
+to have exposed. Please read the plugin documentation on how to configure this.
+
+PLEASE NOTE:
+Exporting large EAD resources with this plugin will most likely cause some problems.
Long requests
+will time out, since the server does not want to waste resources on long-running processes.
+In addition, a large number of requests for PDFs can cause an increased load on the server.
+Please be aware of these plugin issues and limitations before enabling it.
+
+---
diff --git a/src/content/docs/es/administration/upgrading_1_5_0.md b/src/content/docs/es/administration/upgrading_1_5_0.md
new file mode 100644
index 0000000..fb5662a
--- /dev/null
+++ b/src/content/docs/es/administration/upgrading_1_5_0.md
@@ -0,0 +1,147 @@
+---
+title: Upgrading to 1.5.0
+description: Upgrade instructions for upgrading from ArchivesSpace 1.4.2 or lower to 1.5.0, including details on the newest container management feature.
+---
+
+Additional upgrade considerations specific to this release, which also apply to upgrading from 1.4.2 or lower to any version through 2.0.1. Refer to the [upgrade documentation](/administration/upgrading) for the standard instructions that apply in all cases.
+
+## General overview
+
+The upgrade process to the new data model in 1.5.0 requires considerable data transformation, and it is important for users to review this document to understand the implications and possible side-effects.
+
+A quick overview of the steps:
+
+1. Review this document and understand how the upgrade will impact your data, paying particular attention to the [Preparation section](#preparation).
+2. [Back up your database](/administration/backup).
+3. No, really, [back up your database](/administration/backup).
+4. It is suggested that [users start with a new Solr index](/administration/indexes). To do this, delete the data/solr_index/index directory and all files in the data/indexer_state directory. The embedded version of Solr has been upgraded, which should result in a much more compact index size.
+5. Follow the standard [upgrading instructions](/administration/upgrading).
Important to note: The setup-database.sh|bat script will modify your database schema, but it will not move the data. If you are currently using the container management plugin, you will need to remove it from the list of plugins in your config file prior to starting ArchivesSpace.
+6. Start ArchivesSpace. When 1.5.0 starts for the first time, a conversion process will kick off and move the data into the new table structure. **During this time, the application will be unavailable until it completes**. Duration depends on the size of your data and server resources, ranging from a few minutes for very small databases to several hours for very large ones.
+7. When the conversion is done, the web application will start and the indexer will rebuild your index. Performance might be slower while the indexer runs, depending on your server environment and available resources.
+8. Review the [output of the conversion process](#conversion) following the instructions below. How long it takes for the report to load will depend on the number of entries included in it.
+
+## Preparing for and Converting to the New Container Management Functionality
+
+With version 1.5.0, ArchivesSpace is adopting a new data model that will enable more capable and efficient management of the containers in which you store your archival materials. To take advantage of this improved functionality:
+
+- Repositories already using ArchivesSpace as a production application will need to upgrade their ArchivesSpace applications to version 1.5.0. (This upgrade / conversion must be done to take advantage of any other new features / bug fixes in ArchivesSpace 1.5.0 or later versions.)
+- Repositories not yet using ArchivesSpace in production but needing to migrate data from the Archivists’ Toolkit or Archon will need to migrate their data to version 1.4.2 of ArchivesSpace or earlier and then upgrade that version to version 1.5.0. (This can be done when your repository is ready to migrate to ArchivesSpace.)
+- Repositories not yet using ArchivesSpace in production and not needing to migrate data from the Archivists’ Toolkit or Archon can start using ArchivesSpace 1.5.0 without needing to upgrade. (People in this situation do not need to read any further.)
+
+Converting the container data model in version 1.4.2 and earlier versions of ArchivesSpace to the 1.5.0 version has some complexity and may not accommodate all the various ways in which container information has been recorded by diverse repositories. As a consequence, upgrading from a pre-1.5.0 version of ArchivesSpace requires planning for the upgrade, reviewing the results, and, possibly, remediating data either prior to or after the final conversion process. Because of all the variations in which container information can be recorded, it is impossible to know all the ways the data of repositories will be impacted. For this reason, **all repositories upgrading their ArchivesSpace to version 1.5.0 should do so with a backup of their production ArchivesSpace instance and in a test environment.** A conversion may only be undone by reverting back to the source database.
+
+## Frequently Asked Questions
+
+_How will my data be converted to the new model?_
+
+When your installation is upgraded to 1.5.0, the conversion will happen as part of the upgrade process.
+
+_Can I continue to use the current model for containers and not convert to the new model?_
+
+Because it is such a substantial improvement (see the [new features list](#new-features-in-150) below), the new model is required for everyone using ArchivesSpace 1.5.0 and higher. The only way to continue using the current model is to never upgrade beyond 1.4.2.
+
+_What if I’m already using the container management plugin made available to the community by Yale University?_
+
+Conversion of data created using the Yale container management plugin, or a local adaptation of the plugin, will also happen as part of the process of upgrading to 1.5.0.
Some steps will be skipped when they are not needed. At the end of the process, the new container data model will be integrated into your ArchivesSpace and will not need to be loaded or maintained as a plugin.
+
+Those currently running the container management plugin will need to remove it from the plugins list in their config file prior to starting the conversion, or a validation name error will occur.
+
+_I haven’t moved from Archivists’ Toolkit or Archon yet and am planning to use the associated migration tool. Can I migrate directly to 1.5.0?_
+
+No, you must migrate to 1.4.2 or earlier versions and then upgrade your installation to 1.5.0 according to the instructions provided here.
+
+_What changes are being made to the previous model for containers?_
+
+The biggest change is the new concept of top containers. A top container is the highest level container in which a particular instance is stored. Top containers are in some ways analogous to the current Container 1, but broken out from the entire container record (child and grandchild container records). As such, top containers enable more efficient recording and updating of the highest level containers in your collection.
+
+_How does ArchivesSpace determine what is a top container?_
+
+During the conversion, ArchivesSpace will find all the Container 1s in your current ArchivesSpace database. It will then evaluate them as follows:
+
+- If containers have barcodes, one top container is created for each unique Container 1 barcode.
+- If containers do not have barcodes, one top container is created for each unique combination of container 1 indicator and container type 1 within a resource or accession.
+- Once a top container is created, additional instance records for the same container within an accession or resource will be linked to that top container record.
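The evaluation rules above can be sketched roughly in Python. This is a simplified illustration of the grouping logic only, not the actual conversion code; the record fields (`collection_id`, `indicator_1`, `type_1`, `barcode`) are invented for the example:

```python
# Simplified sketch of the top-container grouping rules described above.
# Not the actual ArchivesSpace conversion code; field names are illustrative.
def group_top_containers(instances):
    top_containers = {}
    for inst in instances:
        if inst.get("barcode"):
            # One top container per unique Container 1 barcode
            key = ("barcode", inst["barcode"])
        else:
            # Otherwise, one per unique indicator + type within a resource/accession
            key = ("local", inst["collection_id"], inst["indicator_1"], inst["type_1"])
        # Additional instances with the same key link to the existing top container
        inst["top_container"] = top_containers.setdefault(
            key, {"indicator": inst["indicator_1"],
                  "type": inst["type_1"],
                  "barcode": inst["barcode"]})
    return top_containers

instances = [
    {"collection_id": "MS-1", "indicator_1": "1", "type_1": "box", "barcode": None},
    {"collection_id": "MS-1", "indicator_1": "1", "type_1": "box", "barcode": None},
    {"collection_id": "MS-1", "indicator_1": "1", "type_1": "oversize box", "barcode": None},
]
print(len(group_top_containers(instances)))  # 2 top containers: Box 1 and Oversize Box 1
```

Here the two "Box 1" instances share a single top container while "Oversize Box 1" gets its own, mirroring the second and third rules above.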
+
+## Preparation
+
+_What can I do to prepare my ArchivesSpace data for a smoother conversion to top containers?_
+
+- If your Container 1s have unique barcodes, you do not need to do anything except verify that your data is complete and accurate. You should run a preliminary conversion as described in the Conversion section and resolve any errors.
+- If your Container 1s do not have barcodes, but have a nonduplicative container identifier sequence within each accession or resource (e.g. Box 1, Box 2, Box 3), or the identifiers are only reused within an accession or resource for different types of containers (for example, you have a Box 1 through 10 and an Oversize Box 1 through 3), you do not need to do anything except verify that your data is complete and accurate. You should run a preliminary conversion as described in the Conversion section and resolve any errors.
+- If your Container 1s do not have barcodes and you have parallel numbering sequences, where the same indicators and types are used to refer to different containers within the same accession or resource in some or all accessions or resources (for example, you have a Box 1 in series 1 and a different Box 1 in series 5), you will need to find a way to uniquely identify these containers. One option is to run this [barcoder plugin](https://github.com/archivesspace-plugins/barcoder) for each resource to which this applies. The barcoder plugin creates barcodes that combine the ID of the highest level archival object ancestor with the container 1 type and indicator. (The barcoder plugin is designed to run against one resource at a time, instead of against all resources, because not all resources in a repository may match this condition.) Once you’ve differentiated your containers with parallel number sequences, you should run a preliminary conversion as described in the Conversion section and resolve any errors.
+
+You do not need to make any changes to Container 2 fields or Container 3 fields.
Data in these fields will be converted to the new Child and Grandchild container fields, which map directly to them.
+
+If you use the current Container Extent fields, these will no longer be available in 1.5.0. Any data in these fields will be migrated to a new Extent sub-record during the conversion. You can evaluate whether this data should remain in an extent record or if it belongs in a container profile or other fields and then move it accordingly after the conversion is complete.
+
+_I have EADs I still need to import into ArchivesSpace. How can I get them ready for this new model?_
+
+If you have a box and folder associated with a component (or any other hierarchical relationship of containers), you will need to add identifiers to the container element so that the EAD importer knows which is the top container. If you previously used Archivists' Toolkit to create EAD, your containers probably already have container identifiers. If your container elements do not have identifiers already, Yale University has made available an [XSLT transformation file](https://github.com/YaleArchivesSpace/xslt-files/blob/master/EAD_add_IDs_to_containers.xsl) to add them. You will need to run it before importing the EAD file into ArchivesSpace.
+
+## Conversion
+
+When upgrading from 1.4.2 (and earlier versions) to 1.5.0, the container conversion will happen as part of the upgrade process. You will be able to follow its progress in the log. Instructions for upgrading from a previous version of ArchivesSpace are available in the [upgrade documentation](/administration/upgrading).
+
+Because this is a major change in the data model for this portion of the application, running at least one test conversion is very strongly recommended. Follow these steps to run the upgrade/conversion process:
+
+- Create a backup of your ArchivesSpace instance to use for testing.
**IT IS ESSENTIAL THAT YOU NOT RUN THIS ON A PRODUCTION INSTANCE AS THE CONVERSION CHANGES YOUR DATA, AND THE CHANGES CANNOT BE UNDONE EXCEPT BY REVERTING TO A BACKUP VERSION OF YOUR DATA PRIOR TO RUNNING THE CONVERSION.**
+- Follow the upgrade instructions to unpack a fresh copy of the v1.5.0 release made available for testing, copy your configuration and data files, and transfer your locales.
+- **It is recommended that you delete your Solr index files to start with a fresh index.** We are upgrading the version of Solr that ships with the application, and the upgrade will require a total reindex of your ArchivesSpace data. To do this, delete the data/solr_index/index directory and the files in data/indexer_state.
+- Follow the upgrade instructions to run the database migrations. As part of this step, your container data will be converted to the new data model. You can follow along in the log. Windows users can open the archivesspace.out file in a tool like Notepad++. Mac users can run `tail -f logs/archivesspace.out` to get a live update from the log.
+- When the test conversion has been completed, the log will indicate "Completed: existing containers have been migrated to the new container model."
+
+![Image of Conversion Log](../../../../images/ConversionLog.png)
+
+- Open ArchivesSpace via your browser and log in.
+  Retrieve the container conversion error report from the Background Jobs area:
+- Select Background Jobs from the Settings menu.
+
+![Image of Background Jobs](../../../../images/BackgroundJobs.png)
+
+- The first item listed under Archived Jobs after completing the upgrade should be container_conversion_job. Click View.
+
+![Image of Background Jobs List](../../../../images/BackgroundJobsList.png)
+
+- Under Files, click File to download a CSV file with the errors and a brief explanation.
+
+![Image of Files](../../../../images/Files.png)
+
+![Image of Error Report](../../../../images/ErrorReport.png)
+
+- Go back to your source data and correct any errors that you can before doing another test conversion.
+- When the error report shows no errors, or when you are satisfied with the remaining errors, your production instance is ready to be upgraded.
+- When the final upgrade/conversion is complete, you can move ArchivesSpace version 1.5.0 into production.
+
+_What are some common errors or anomalies that will be flagged in the conversion?_
+
+- A container with a barcode has different indicators or types in different records.
+- A container with a particular type and indicator sometimes has a barcode and sometimes doesn’t.
+- A container is missing a type or indicator.
+- Container levels are skipped (for example, there is a Container 1 and a Container 3, but no Container 2).
+- A container has multiple locations.
+
+The conversion process can resolve some of these errors for you by supplying or deleting values as it deems appropriate, but for the most control over the process you will most likely want to resolve such issues yourself in your ArchivesSpace database before converting to the new container model.
+
+_Are there any known conversion issues?_
+
+Due to a change in the ArchivesSpace EAD importer in 2015, some EADs with hierarchical containers not designated by a @parent attribute were turned into multiple instance records. This has since been corrected in the application, but we are working on a plugin (now available at [Instance Joiner Plugin](https://github.com/archivesspace-plugins/instance_joiner)) that will enable you to turn these back into single instances so that subcontainers are not mistakenly turned into top containers.
+ +## New features in 1.5.0 + +**Top containers replace Container 1s.** Unlike Container 1s in the current version of ArchivesSpace, top containers in the upcoming version can be defined once and linked many times to various archival objects, resources, and accessions. + +**The ability to create container profiles and associate them with top containers.** Optional container profiles allow you to track information about the containers themselves, including dimensions. + +**Extent calculator.** In conjunction with container profiles, the new extent calculator allows you to easily see extents for accessions, resources, or resource components. Optionally, you can use the calculator to generate extent records for an accession, resource, or resource component. + +**Bulk operations for containers.** The Manage Top Containers area provides more efficient ways to work with multiple containers, including the ability to add or edit barcodes, change locations, and delete top containers in bulk. + +**The ability to "share" boxes across collections in a meaningful way.** You can define top containers separately from individual accessions and resources and access them from multiple accession and resource records. For example, this might be helpful for recording information about an oversize box that contains items from many collections. + +**The ability to store data that will help you synchronize between ArchivesSpace and item records in your ILS.** If your institution creates item records in its ILS for containers, you can now record that information within ArchivesSpace as well. + +**The ability to store data about the restriction status of material associated with a container.** You can now see at a glance whether any portion of the contents of a container is restricted. + +**Machine-actionable restrictions.** You will now have the ability to associate begin and end dates with "conditions governing access" and "conditions governing use" Notes. 
You'll also be able to associate a local restriction type for non-time-bound restrictions. This gives the ability to better manage and re-describe expiring restrictions. + +For more information on using the new features, consult the user manual, particularly the new section titled Managing Containers (available late April 2016). diff --git a/src/content/docs/es/administration/upgrading_2_1_0.md b/src/content/docs/es/administration/upgrading_2_1_0.md new file mode 100644 index 0000000..05b8e8e --- /dev/null +++ b/src/content/docs/es/administration/upgrading_2_1_0.md @@ -0,0 +1,30 @@ +--- +title: Upgrading to 2.1.0 +description: Instructions on upgrading to ArchivesSpace 2.1.0 if coming from 1.4.2 or below, Archivists' Toolkit or Archon, or if using an external Solr server, in addition to notes on rights statement data migration. +--- + +Additional upgrade considerations specific to this release. Refer to the [upgrade documentation](/administration/upgrading) for the standard instructions that apply in all cases. + +:::note +These considerations also apply when upgrading to any version past 2.1.0 from a version prior to 2.1.0. +::: + +## For those upgrading from 1.4.2 and lower + +Following the merge of the Container Management Plugin in 1.5.0, ArchivesSpace still retained the old container model and had a number of dependencies on it. This imposed unnecessary complexity and some performance degradation on the system. + +In this release all references to the old container model have been removed and the parts of the application that were dependent on it (for example, Imports and Exports) have been refactored to use the new container model. + +A consequence of this change is that if you are upgrading from ArchivesSpace version of 1.4.2 or lower, you will need to first upgrade to any version between 1.5.0 and 2.0.1 to run the container conversion. You will then be able to upgrade to 2.1.0. 
If you are already using any version of ArchivesSpace between 1.5.0 and 2.0.1, you will be able to upgrade directly to 2.1.0.
+
+## For those needing to migrate data from Archivists' Toolkit or Archon using the migration tools
+
+The migration tools are currently supported through version 1.4.2 only. If you want to migrate data to ArchivesSpace using one of these tools, you must migrate it to 1.4.2. From there you can follow the instructions for those upgrading from 1.4.2 and lower.
+
+## Data migrations in this release
+
+The rights statements data model has changed in 2.1.0. If you currently use rights statements, your data will be converted to the new model during the setup-database step of the upgrade process. We strongly urge you to back up your database and run at least one test upgrade before putting 2.1.0 into production.
+
+## For those using an external Solr server
+
+The index schema has changed with 2.1.0. If you are using an external Solr server, you will need to update the [schema.xml](https://github.com/archivesspace/archivesspace/blob/master/solr/schema.xml) with the newer version. If you are using the default Solr index that ships with ArchivesSpace, no action is needed.
diff --git a/src/content/docs/es/administration/windows.md b/src/content/docs/es/administration/windows.md
new file mode 100644
index 0000000..a34b237
--- /dev/null
+++ b/src/content/docs/es/administration/windows.md
@@ -0,0 +1,60 @@
+---
+title: Running as a Windows service
+description: Instructions on how to set up ArchivesSpace as a Windows service.
+---
+
+Running ArchivesSpace as a Windows service requires some additional configuration.
+
+You can use Apache [procrun](http://commons.apache.org/proper/commons-daemon/procrun.html) to configure ArchivesSpace to run as a Windows service. We have provided a service.bat script that will attempt to configure procrun for you (under `launcher\service.bat`).
+
+To run this script, first you need to [download procrun](http://www.apache.org/dist/commons/daemon/binaries/windows/).
+Extract the files and copy the prunsrv.exe and prunmgr.exe to your ArchivesSpace directory.
+
+To find the path to Java, go to "Start" > "Control Panel" > "Java" and select the "Java" tab. You'll see the path there. It will look something like `C:\Program Files (x86)\Java`
+
+You also need to be sure that Java is in your system path and also to create `JAVA_HOME` as a global environment variable.
+To add Java to your path, edit your %PATH% environment variable to include the directory of your java executable ( it will be something like `C:\Program Files (x86)\Java` ). To add `JAVA_HOME`, add a new system variable and put the directory where Java was installed ( something like `C:\Program Files (x86)\Java` ).
+
+Environment variables can be found by going to "Start" > "Control Panel" and searching for "environment". Click "edit the system environment variables". In the section "System Variables", find the `PATH` environment variable and select it. Click Edit. If the `PATH` environment variable does not exist, click New. In the Edit System Variable (or New System Variable) window, specify the value of the `PATH` environment variable. Click OK. Close all remaining windows by clicking OK. Do the same for `JAVA_HOME`.
+
+Before setting up the ArchivesSpace service, you should also [configure ArchivesSpace to run against MySQL](/provisioning/mysql).
+Be sure that the MySQL connector jar file is in the lib directory, in order for
+the service setup script to add it to the application's classpath.
+
+Lastly, for the service to shut down cleanly, uncomment and change these lines in
+config/config.rb:
+
+```ruby
+AppConfig[:use_jetty_shutdown_handler] = true
+AppConfig[:jetty_shutdown_path] = "/xkcd"
+```
+
+This enables a shutdown hook for Jetty to respond to when the shutdown action
+is taken.
+
+You can now execute the batch script from the command line in your ArchivesSpace
+root directory with `launcher\service.bat`. This will configure the service and
+provide two executables: `ArchivesSpaceService.exe` (the service) and
+`ArchivesSpaceServicew.exe` (a GUI monitor).
+
+There are several options to launch the service. The easiest is to open the GUI
+monitor and click "Launch".
+
+Alternatively, you can start the GUI monitor and minimize it in your
+system tray with:
+
+```shell
+ArchivesSpaceServicew.exe //MS//
+```
+
+To execute the service from the command line, you can invoke:
+
+```shell
+ArchivesSpaceService.exe //ES//
+```
+
+Log output will be placed in your ArchivesSpace log directory.
+
+Please see the [procrun
+documentation](http://commons.apache.org/proper/commons-daemon/procrun.html)
+for more information.
diff --git a/src/content/docs/es/api/index.md b/src/content/docs/es/api/index.md
new file mode 100644
index 0000000..3f79dc2
--- /dev/null
+++ b/src/content/docs/es/api/index.md
@@ -0,0 +1,486 @@
+---
+title: Working with the API
+description: General information about working with the API, including authentication, get, and post requests with examples.
+---
+
+:::tip
+This documentation provides general information on working with the API. For detailed documentation of specific endpoints, see the [API reference](http://archivesspace.github.io/archivesspace/api/), which is maintained separately.
+:::
+
+## Authentication
+
+Most actions against the backend require you to be logged in as a user
+with the appropriate permissions. By sending a request like:
+
+    POST /users/admin/login?password=login
+
+your authentication request will be validated, and a session token
+will be returned in the JSON response for your request. To remain
+authenticated, provide this token with subsequent requests in the
+`X-ArchivesSpace-Session` header.
For example:
+
+    X-ArchivesSpace-Session: 8e921ac9bbe9a4a947eee8a7c5fa8b4c81c51729935860c1adfed60a5e4202cb
+
+Since not all backend/API endpoints require authentication, it is best to restrict access to port 8089 to only IP addresses you trust. Your firewall should be used to specify a range of IP addresses that are allowed to call your ArchivesSpace API endpoint. This is commonly called whitelisting or allowlisting.
+
+### Example requests using CURL
+
+Send a request to authenticate:
+
+```shell
+curl -s -F password="admin" "http://localhost:8089/users/admin/login"
+```
+
+This will return a JSON response that includes something like the following:
+
+<!-- prettier-ignore -->
+```json
+{
+  "session":"9528190655b979f00817a5d38f9daf07d1686fed99a1d53dd2c9ff2d852a0c6e",
+  ....
+}
+```
+
+It’s a good idea to save the session key as an environment variable to use for later requests:
+
+```shell
+#Mac/Unix terminal
+export SESSION="9528190655b979f00817a5d38f9daf07d1686fed99a1d53dd2c9ff2d852a0c6e"
+
+#Windows Command Prompt
+set SESSION="9528190655b979f00817a5d38f9daf07d1686fed99a1d53dd2c9ff2d852a0c6e"
+
+#Windows PowerShell
+$env:SESSION="9528190655b979f00817a5d38f9daf07d1686fed99a1d53dd2c9ff2d852a0c6e"
+```
+
+Now you can make requests like this:
+
+```shell
+curl -H "X-ArchivesSpace-Session: $SESSION" "http://localhost:8089/repositories/2/resources/1"
+```
+
+## CRUD
+
+The ArchivesSpace API provides CRUD-style interactions for a number of
+different "top-level" record types. Working with records follows a
+fairly standard pattern:
+
+    # Get a paginated list of accessions from repository '123'
+    GET /repositories/123/accessions?page=1
+
+    # Create a new accession, returning the ID of the new record
+    POST /repositories/123/accessions
+    {...
a JSON document satisfying JSONModel(:accession) here ...}
+
+    # Get a single accession (returned as a JSONModel(:accession) instance) using the ID returned by the previous request
+    GET /repositories/123/accessions/456
+
+    # Update an existing accession
+    POST /repositories/123/accessions/456
+    {... a JSON document satisfying JSONModel(:accession) here ...}
+
+## Performing API requests
+
+### GET requests
+
+#### Resolving associated records
+
+The :resolve parameter is a way to tell ArchivesSpace to attach the full object to the refs in a response; it is passed in as an
+array of keys to "prefetch" in the returned JSON. The object is included in the ref under a \_resolved key.
+
+For example, to find an archival object by a ref_id and return the found archival object, you can attach
+`resolve[]: "archival_objects"` within your request.
+
+##### Shell Example
+
+> ```shell
+> curl -s -F password="admin" "http://localhost:8089/users/admin/login"
+> # Replace "admin" with your password and "http://localhost:8089/users/admin/login" with your ASpace API URL
+> # followed by "/users/{your_username}/login"
+>
+> set SESSION="session_id"
+> # If using a unix-like shell, replace set with export
+>
+> curl -H "X-ArchivesSpace-Session: $SESSION" \
+> "http://localhost:8089/repositories/:repo_id:/find_by_id/archival_objects?ref_id[]=hello_im_a_ref_id;resolve[]=archival_objects"
+> # Replace "http://localhost:8089" with your ASpace API URL, :repo_id: with the repository ID,
+> # "hello_im_a_ref_id" with the ref ID you are searching for, and only add
+> # "resolve[]=archival_objects" if you want the JSON for the returned record - otherwise, it will return the
+> # record URI only
+> ```
+
+##### Python Example
+
+> ```python
+> from asnake.client import ASnakeClient # import the ArchivesSnake client
+>
+> client = ASnakeClient(baseurl="http://localhost:8089", username="admin", password="admin")
+> # Replace "http://localhost:8089" with your ArchivesSpace API URL and "admin" for your
username and password
> 
> client.authorize()  # authorizes the client
> 
> find_ao_refid = client.get("repositories/:repo_id:/find_by_id/archival_objects",
>                            params={"ref_id[]": "hello_im_a_ref_id",
>                                    "resolve[]": "archival_objects"})
> # Replace :repo_id: with the repository ID, "hello_im_a_ref_id" with the ref ID you are searching for, and only add
> # "resolve[]": "archival_objects" if you want the JSON for the returned record - otherwise, it will return the
> # record URI only
> 
> print(find_ao_refid.json())
> # Output (dict): {'archival_objects': [{'ref': '/repositories/2/archival_objects/708425', '_resolved':...}]}
> ```

#### Requests for paginated results

Endpoints that represent groups of objects, rather than single objects, tend to be paginated. Paginated endpoints are called out in the documentation as special, with some version of the following content appearing:

    This endpoint is paginated. :page, :id_set, or :all_ids is required

    Integer page – The page set to be returned
    Integer page_size – The size of the set to be returned ( Optional. 
default set in AppConfig )
    Comma separated list id_set – A list of ids to request resolved objects ( Must be smaller than default page_size )
    Boolean all_ids – Return a list of all object ids

These endpoints support some or all of the following:

- paged access to objects (via :page)
- listing all matching ids (via :all_ids)
- fetching specific known objects via their database ids (via :id_set)

##### Shell Example

> ```shell
> curl -s -F password="admin" "http://localhost:8089/users/admin/login"
> # Replace "admin" with your password and "http://localhost:8089/users/admin/login" with your ASpace API URL
> # followed by "/users/{your_username}/login"
> 
> set SESSION="session_id"
> # If using a unix-like shell, replace set with export
> 
> # For all archival objects, use all_ids
> curl -H "X-ArchivesSpace-Session: $SESSION" \
> "http://localhost:8089/repositories/2/archival_objects?all_ids=true"
> 
> # For a set of archival objects, use id_set
> curl -H "X-ArchivesSpace-Session: $SESSION" \
> "http://localhost:8089/repositories/2/archival_objects?id_set=707458&id_set=707460&id_set=707461"
> 
> # For a page of archival objects, use page and page_size
> curl -H "X-ArchivesSpace-Session: $SESSION" \
> "http://localhost:8089/repositories/2/archival_objects?page=1&page_size=10"
> ```

> Python example needed

#### Working with long result sets

When working with search results using page and page_size parameters, many results can be returned, and managing those
results can be difficult. The Python example below demonstrates how to take a large paginated result set and iterate
through it to work with archival objects. 


##### Python Example

> ```python
> import re  # used below to match field names against a pattern
> 
> from asnake.client import ASnakeClient  # import the ArchivesSnake client
> 
> client = ASnakeClient(baseurl="http://localhost:8089", username="admin", password="admin")
> # Replace http://localhost:8089 with your ArchivesSpace API URL and admin for your username and password
> 
> client.authorize()  # authorizes the client
> 
> # To get a page of archival objects with a set page size, use "page" and "page_size" parameters
> get_repo_aos_pages = client.get("repositories/2/archival_objects", params={"page": 1, "page_size": 10})
> # Replace 2 with your repository ID. Find this in the URI of your archival object on the bottom right of the
> # Basic Information section in the staff interface
> 
> print(get_repo_aos_pages.json())
> # Output (dictionary): {'first_page': 1, 'last_page': 26949, 'this_page': 1, 'total': 269488,
> # 'results': [{'lock_version': 1, 'position': 0,...]...}
> 
> results = get_repo_aos_pages.json()["results"]  # the records for this page, already parsed as dicts
> result_count = len(results)  # the count of results on this page
> 
> id_field_regex = re.compile(r"ref_id|component_id")  # example pattern for picking out ID-type fields
> for result in results:
>     for key, value in result.items():
>         if id_field_regex.match(key):
>             print(result["uri"], key, value)
> ```

#### Search requests

A number of routes in the ArchivesSpace API are designed to search for content across all or part of the records in the
application. These routes make use of Solr, a component bundled with ArchivesSpace and used to provide full text search
over records.

The search routes present in the application as of this time are:

- Search this archive
- Search across repositories
- Search this repository
- Search across subjects
- Search for top containers
- Search across location profiles

Search routes take quite a few different parameters, most of which correspond directly to Solr query parameters. The
most important parameter to understand is `q`, which is the query sent to Solr. This query is made in Lucene query
syntax. 
The relevant docs are in the Solr Ref Guide's [The Standard Query Parser](https://solr.apache.org/guide/6_6/the-standard-query-parser.html#the-standard-query-parser) page.

To limit a search to records of a particular type or set of types, you can use the `type` parameter. This is only
relevant for search endpoints that aren't limited to specific types. Note that `type` is expected to be a list of types,
even if there is only one type you care about.

##### Notes on search routes and results

ArchivesSpace represents records as JSONModel objects - this is what you get from and send to the system.

Solr takes these records and stores documents _based on_ these JSONModel objects in a searchable index.

Search routes query these documents, _not_ the records themselves as stored in the database and represented by JSONModel.

JSONModel objects and Solr documents are similar in some ways:

- Both Solr documents and JSONModel objects are expressed in JSON
- In general, a document contains some subset of the JSONModel object it represents

But they also differ in quite a few important ways:

- Solr documents don't necessarily have all fields from a JSONModel object
- Solr documents do not automatically contain nested JSONModel objects
- Solr documents can have fields defined that are arbitrary "search representations" of fields in associated records,
  or combinations of fields in a record
- Solr documents don't have a `jsonmodel_type` field - the record's `jsonmodel_type` is stored as `primary_type` in Solr

How do I get the actual JSONModel object from a search document?

In ArchivesSpace, Solr documents all have a `json` field, which contains the JSONModel object the document represents as
a string. You can use a JSON library to parse this string from the field, for example the `json` library in Python. 
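

As a minimal sketch, parsing that `json` field with Python's standard `json` library might look like this. The `search_response` value below is made up for demonstration (real search responses carry many more fields per document); only the `results`, `primary_type`, and `json` keys follow the document shape described above.

```python
import json

# Illustrative search response: a "results" list whose documents carry the
# record's type in "primary_type" and the stringified JSONModel object in "json".
# This sample data is fabricated for the example, not real API output.
search_response = {
    "results": [
        {
            "primary_type": "archival_object",
            "json": '{"jsonmodel_type": "archival_object", "title": "Sample folder"}',
        }
    ]
}

for doc in search_response["results"]:
    record = json.loads(doc["json"])  # parse the stringified JSONModel object
    print(doc["primary_type"], "->", record["title"])
    # prints: archival_object -> Sample folder
```

The same `json.loads` call applies to each document returned by any of the search routes above.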


##### Shell Example

> ```shell
> 
> # auto-generated example
> curl -H "X-ArchivesSpace-Session: $SESSION" \
> "http://localhost:8089/search/repositories?q=&aq=%7B%22jsonmodel_type%22%3D%3E%22advanced_query%22%2C+%22query%22%3D%3E%7B%22jsonmodel_type%22%3D%3E%22boolean_query%22%2C+%22op%22%3D%3E%22AND%22%2C+%22subqueries%22%3D%3E%5B%7B%22jsonmodel_type%22%3D%3E%22date_field_query%22%2C+%22negated%22%3D%3Efalse%2C+%22comparator%22%3D%3E%22empty%22%2C+%22field%22%3D%3E%22QSUC205%22%2C+%22value%22%3D%3E%222018-03-26%22%7D%5D%7D%7D&type%5B%5D=&sort=&facet%5B%5D=&facet_mincount=1&filter=%7B%22jsonmodel_type%22%3D%3E%22advanced_query%22%2C+%22query%22%3D%3E%7B%22jsonmodel_type%22%3D%3E%22boolean_query%22%2C+%22op%22%3D%3E%22AND%22%2C+%22subqueries%22%3D%3E%5B%7B%22jsonmodel_type%22%3D%3E%22date_field_query%22%2C+%22negated%22%3D%3Efalse%2C+%22comparator%22%3D%3E%22empty%22%2C+%22field%22%3D%3E%22QSUC205%22%2C+%22value%22%3D%3E%222018-03-26%22%7D%5D%7D%7D&filter_query%5B%5D=&exclude%5B%5D=&hl=BooleanParam&root_record=&dt=&fields%5B%5D="
> 
> # auto-generated example
> curl -H 'Content-Type: text/json' -H "X-ArchivesSpace-Session: $SESSION" \
> "http://localhost:8089/search/repositories" \
> -d '{
>   "aq": {
>     "jsonmodel_type": "advanced_query",
>     "query": {
>       "jsonmodel_type": "boolean_query",
>       "op": "AND",
>       "subqueries": [
>         {
>           "jsonmodel_type": "date_field_query",
>           "negated": false,
>           "comparator": "empty",
>           "field": "QSUC205",
>           "value": "2018-03-26"
>         }
>       ]
>     }
>   },
>   "facet_mincount": "1",
>   "filter": {
>     "jsonmodel_type": "advanced_query",
>     "query": {
>       "jsonmodel_type": "boolean_query",
>       "op": "AND",
>       "subqueries": [
>         {
>           "jsonmodel_type": "date_field_query",
>           "negated": false,
>           "comparator": "empty",
>           "field": "QSUC205",
>           "value": "2018-03-26"
>         }
>       ]
>     }
>   },
>   "hl": "BooleanParam"
> }'
> ```

### POST requests

#### Updating existing records

For updating existing records, it's recommended to 
first do a GET request for the record you want to update. This
ensures you are working from the current version of the record and reduces the chance of inadvertently losing data:
fields that exist on the record but are left out of the update will be dropped. After getting the original
record data, you can update it as needed and then do a POST request with the updated data. Make sure that the updated
data is JSON formatted and is passed through the `-d` or `--data` parameter, or the `json` parameter if using
ArchivesSnake.

##### Shell Example

> ```shell
> curl -s -F password="admin" "http://localhost:8089/users/admin/login"
> # Replace "admin" with your password and "http://localhost:8089" with your ASpace API URL followed by
> # "/users/{your_username}/login"
> 
> set SESSION="session_id"
> # If using a unix-like shell, replace set with export
> 
> curl -H 'Content-Type: text/json' -H "X-ArchivesSpace-Session: $SESSION" \
> "http://localhost:8089/repositories/:repo_id:/groups/:group_id:" \
> -d '{"group_code": "test-group_managers",
> "lock_version": 4,
> "description": "Test group managers of the Manuscripts repository",
> "jsonmodel_type": "group",
> "member_usernames": [
> "manager", "advance"]}'
> # Replace http://localhost:8089 with your ArchivesSpace API URL, :repo_id: with the repository ID number,
> # :group_id: with the group ID number you want to update, and the data found after -d with the data you want
> # to update the group with. Be sure to include "lock_version" with its most recent value. 
You can find the
> # most recent lock_version by submitting a get request, like so: curl -H "X-ArchivesSpace-Session: $SESSION" \
> # "http://localhost:8089/repositories/:repo_id:/groups/:group_id:"
> 
> # Output:
> # {"status":"Updated","id":23,"lock_version":5,"stale":null,"uri":"/repositories/2/groups/23","warnings":[]}
> ```

##### Python Example

> ```python
> from asnake.client import ASnakeClient  # import the ArchivesSnake client
> 
> client = ASnakeClient(baseurl="http://localhost:8089", username="admin", password="admin")
> # Replace http://localhost:8089 with your ArchivesSpace API URL and admin for your username and password
> 
> client.authorize()  # authorizes the client
> 
> get_user_group = client.get("repositories/:repo_id:/groups/:group_id:").json()
> # Retrieve the data from the group you are trying to update. Replace :repo_id: with the repository ID number and
> # :group_id: with the group ID number you want to update
> 
> get_user_group["member_usernames"].append("advance")
> # An example of how to modify a group record. For a list of all the fields you can update,
> # print(get_user_group). Here we append a new user 'advance' to the list of users associated with this group.
> 
> update_user_group = get_user_group
> # Assign the updated get_user_group to update_user_group to make the following step easier to read.
> 
> update_status = client.post("repositories/:repo_id:/groups/:group_id:", json=update_user_group)
> # Replace :repo_id: with the repository ID number and :group_id: with the group ID number you want to update
> 
> print(update_status.json())
> # Output:
> # {'status': 'Updated', 'id': 48, 'lock_version': 1, 'stale': None, 'uri': '/repositories/2/groups/48',
> # 'warnings': []}
> ```

#### Creating new records

When creating new records, it's recommended to first do a GET request for an existing record of the type you want to create. 
This
example record is useful for seeing what fields are included for that specific record type. Not all fields are required; for
example, the `created` and `modified` fields are not necessary when creating a new record, since those fields are
handled automatically. Others, such as `title` and `jsonmodel_type`, are required.

After examining an existing record for reference, craft your JSON-formatted data and make a POST request. Make sure
that the new record is passed through the `-d` or `--data` parameter, or the `json` parameter if using ArchivesSnake.

##### Shell Example

> ```shell
> # Create a new user group from the shell
> curl -s -F password="admin" "http://localhost:8089/users/admin/login"
> # Replace "admin" with your password and "http://localhost:8089" with your ASpace API URL followed by
> # "/users/{your_username}/login"
> 
> set SESSION="session_id"
> # If using a unix-like shell, replace set with export
> 
> curl -H "X-ArchivesSpace-Session: $SESSION" "http://localhost:8089/repositories/:repo_id:/groups/" \
> -d '{"group_code": "test-group_managers",
> "description": "Test group managers of the Manuscripts repository",
> "jsonmodel_type": "group"}'
> # Replace "http://localhost:8089" with your ASpace API URL, :repo_id: with the repository ID, and
> # the data found after -d with the metadata for the new user group. 

> 
> # Output
> # {"status":"Created","id":24,"lock_version":0,"stale":null,"uri":"/repositories/2/groups/24","warnings":[]}
> ```

##### Python Example

> ```python
> # Create a new user group using Python and ArchivesSnake
> from asnake.client import ASnakeClient  # import the ArchivesSnake client
> 
> client = ASnakeClient(baseurl="http://localhost:8089", username="admin", password="admin")
> # Replace http://localhost:8089 with your ArchivesSpace API URL and admin for your username and password
> 
> client.authorize()  # authorizes the client
> 
> new_group = {
>     "group_code": "test-group_managers",
>     "description": "Test group managers of the Manuscripts repository",
>     "jsonmodel_type": "group",
>     "member_usernames": [
>         "manager"
>     ],
>     "grants_permissions": [
>         "cancel_job",
>         "manage_enumeration_record"]
> }
> # This is a sample user group that exceeds the minimum requirements. The minimum requirements are:
> # jsonmodel_type, description, and group_code. grants_permissions is optional; these values can be looked up in
> # the ASpace database within the permissions table
> 
> post_user_group = client.post("repositories/:repo_id:/groups", json=new_group)
> # Replace :repo_id: with the ArchivesSpace repository ID; new_group holds the JSON data for the new user
> # group
> 
> print(post_user_group.json())
> # Output:
> # {'status': 'Created', 'id': 23, 'lock_version': 0, 'stale': None, 'uri': '/repositories/2/groups/23',
> # 'warnings': []}
> ```

### DELETE requests

A DELETE request via the API permanently deletes a record, just as in the staff interface. Be careful! Make
sure you want to delete the entire record before doing so. If you want to delete parts of a record, for example some
notes or other fields, see [Updating existing records](#updating-existing-records). 


To delete a record, retrieve the record's ArchivesSpace-generated ID, then use curl's `-X DELETE` option from the shell, or
`client.delete` if using the ArchivesSnake Python library.

##### Shell Example

> ```shell
> # Delete a user group from the shell
> curl -s -F password="admin" "http://localhost:8089/users/admin/login"
> # Replace "admin" with your password and "http://localhost:8089" with your ASpace API URL followed by
> # "/users/{your_username}/login"
> 
> set SESSION="session_id"
> # If using a unix-like shell, replace set with export
> 
> curl -H "X-ArchivesSpace-Session: $SESSION" \
> -X DELETE "http://localhost:8089/repositories/:repo_id:/groups/:group_id:"
> # Replace "http://localhost:8089" with your ASpace API URL, :repo_id: with the repository ID, and
> # :group_id: with the ID of the group you want to delete (usually found in the URL of the user group when
> # viewing in the staff interface). Deleting is permanent so make sure to test this first!
> 
> # Output: {"status":"Deleted","id":47}
> ```

##### Python Example

> ```python
> # Delete a user group from a repository using Python and ArchivesSnake
> from asnake.client import ASnakeClient  # import the ArchivesSnake client
> 
> client = ASnakeClient(baseurl="http://localhost:8089", username="admin", password="admin")
> # Replace http://localhost:8089 with your ArchivesSpace API URL and admin for your username and password
> 
> client.authorize()  # authorizes the client
> 
> delete_user_group = client.delete("repositories/:repo_id:/groups/:group_id:")
> # Replace :repo_id: with the ArchivesSpace repository ID and :group_id: with the ArchivesSpace ID of the
> # user group you want to delete. Deleting is permanent so make sure to test this first! 
+> +> print(delete_user_group.json()) +> # Output: {'status': 'Deleted', 'id': 23} +> ``` diff --git a/src/content/docs/es/architecture/api.md b/src/content/docs/es/architecture/api.md new file mode 100644 index 0000000..474cf47 --- /dev/null +++ b/src/content/docs/es/architecture/api.md @@ -0,0 +1,48 @@ +--- +title: API +description: Instructions for how to authenticate when trying to connect to a backend session, such as through the API, along with examples of common requests for getting and posting data. +--- + +:::note +See the [API section](/api/index) for more detailed documentation. +::: + +## Authentication + +Most actions against the backend require you to be logged in as a user +with the appropriate permissions. By sending a request like: + +``` +POST /users/admin/login?password=login +``` + +your authentication request will be validated, and a session token +will be returned in the JSON response for your request. To remain +authenticated, provide this token with subsequent requests in the +`X-ArchivesSpace-Session` header. For example: + +``` +X-ArchivesSpace-Session: 8e921ac9bbe9a4a947eee8a7c5fa8b4c81c51729935860c1adfed60a5e4202cb +``` + +## CRUD + +The ArchivesSpace API provides CRUD-style interactions for a number of +different "top-level" record types. Working with records follows a +fairly standard pattern: + +``` +# Get a paginated list of accessions from repository '123' +GET /repositories/123/accessions?page=1 + +# Create a new accession, returning the ID of the new record +POST /repositories/123/accessions +{... a JSON document satisfying JSONModel(:accession) here ...} + +# Get a single accession (returned as a JSONModel(:accession) instance) using the ID returned by the previous request +GET /repositories/123/accessions/456 + +# Update an existing accession +POST /repositories/123/accessions/456 +{... 
a JSON document satisfying JSONModel(:accession) here ...} +``` diff --git a/src/content/docs/es/architecture/archivesspace_architecture.svg b/src/content/docs/es/architecture/archivesspace_architecture.svg new file mode 100644 index 0000000..e7ded40 --- /dev/null +++ b/src/content/docs/es/architecture/archivesspace_architecture.svg @@ -0,0 +1,105 @@ +<svg width="100%" viewBox="0 0 680 560" xmlns="http://www.w3.org/2000/svg"> +<defs> +<marker id="arrow" viewBox="0 0 10 10" refX="8" refY="5" markerWidth="6" markerHeight="6" orient="auto-start-reverse"> +<path d="M2 1L8 5L2 9" fill="none" stroke="context-stroke" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/> +</marker> +</defs> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="40" y="22" width="160" height="42" rx="8" stroke-width="0.5" style="fill:rgb(8, 80, 65);stroke:rgb(93, 202, 165);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="120" y="43" text-anchor="middle" dominant-baseline="central" style="fill:rgb(159, 225, 203);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Logged-in users</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe 
UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="265" y="22" width="150" height="42" rx="8" stroke-width="0.5" style="fill:rgb(68, 68, 65);stroke:rgb(180, 178, 169);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="340" y="43" text-anchor="middle" dominant-baseline="central" style="fill:rgb(211, 209, 199);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Internet</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="480" y="22" width="160" height="42" rx="8" stroke-width="0.5" style="fill:rgb(113, 43, 19);stroke:rgb(240, 153, 123);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="560" y="43" text-anchor="middle" dominant-baseline="central" style="fill:rgb(245, 196, 179);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Anonymous users</text> +</g> + +<line x1="200" y1="43" x2="265" y2="43" 
stroke="#0F6E56" stroke-width="1.5" fill="none" marker-end="url(#arrow)" style="fill:none;stroke:rgb(15, 110, 86);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<line x1="480" y1="43" x2="415" y2="43" stroke="#993C1D" stroke-width="1.5" fill="none" marker-end="url(#arrow)" style="fill:none;stroke:rgb(153, 60, 29);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +<path d="M310,64 C300,108 105,96 105,138" fill="none" stroke="#0F6E56" stroke-width="1.5" marker-end="url(#arrow)" style="fill:none;stroke:rgb(15, 110, 86);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<path d="M370,64 C380,108 547,96 547,138" fill="none" stroke="#993C1D" stroke-width="1.5" marker-end="url(#arrow)" style="fill:none;stroke:rgb(153, 60, 29);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +<rect x="15" y="115" width="650" height="145" rx="12" fill="none" stroke="var(--color-border-secondary)" stroke-width="0.5" stroke-dasharray="6 4" style="fill:none;stroke:rgba(222, 220, 209, 0.3);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-dasharray:6px, 4px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, 
BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="290" y="104" width="100" height="22" rx="11" stroke-width="0.5" style="fill:rgb(12, 68, 124);stroke:rgb(133, 183, 235);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="340" y="115" text-anchor="middle" dominant-baseline="central" style="fill:rgb(181, 212, 244);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Frontend</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="20" y="138" width="170" height="58" rx="8" stroke-width="0.5" style="fill:rgb(12, 68, 124);stroke:rgb(133, 183, 235);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="105" y="155" text-anchor="middle" dominant-baseline="central" style="fill:rgb(181, 212, 
244);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Staff UI</text> +<text x="105" y="173" text-anchor="middle" dominant-baseline="central" style="fill:rgb(133, 183, 235);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">JRuby · Rails · jQuery</text> +</g> +<line x1="36" y1="192" x2="174" y2="192" stroke="#0F6E56" stroke-width="2" stroke-linecap="round" style="fill:rgb(0, 0, 0);stroke:rgb(15, 110, 86);color:rgb(251, 251, 254);stroke-width:2px;stroke-linecap:round;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="248" y="138" width="170" height="58" rx="8" stroke-width="0.5" style="fill:rgb(12, 68, 124);stroke:rgb(133, 183, 235);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="333" y="158" text-anchor="middle" dominant-baseline="central" style="fill:rgb(181, 212, 244);stroke:none;color:rgb(251, 251, 
254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Background jobs</text> +<text x="333" y="176" text-anchor="middle" dominant-baseline="central" style="fill:rgb(133, 183, 235);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">JRuby · Ruby</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="462" y="138" width="170" height="58" rx="8" stroke-width="0.5" style="fill:rgb(12, 68, 124);stroke:rgb(133, 183, 235);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="547" y="155" text-anchor="middle" dominant-baseline="central" style="fill:rgb(181, 212, 244);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Public UI</text> +<text x="547" y="173" text-anchor="middle" dominant-baseline="central" style="fill:rgb(133, 183, 235);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, 
BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">JRuby · Rails · jQuery</text> +</g> +<line x1="478" y1="192" x2="616" y2="192" stroke="#993C1D" stroke-width="2" stroke-linecap="round" style="fill:rgb(0, 0, 0);stroke:rgb(153, 60, 29);color:rgb(251, 251, 254);stroke-width:2px;stroke-linecap:round;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +<line x1="190" y1="167" x2="248" y2="167" marker-end="url(#arrow)" style="fill:none;stroke:rgb(156, 154, 146);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +<path d="M105,196 C105,258 80,258 80,330" fill="none" marker-end="url(#arrow)" style="fill:none;stroke:rgb(156, 154, 146);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<path d="M333,196 C333,262 120,262 120,330" fill="none" marker-end="url(#arrow)" style="fill:none;stroke:rgb(156, 154, 146);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<path d="M547,196 C547,268 160,268 160,330" fill="none" marker-end="url(#arrow)" style="fill:none;stroke:rgb(156, 154, 146);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe 
UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +<rect x="15" y="310" width="650" height="115" rx="12" fill="none" stroke="var(--color-border-secondary)" stroke-width="0.5" stroke-dasharray="6 4" style="fill:none;stroke:rgba(222, 220, 209, 0.3);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-dasharray:6px, 4px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="290" y="299" width="100" height="22" rx="11" stroke-width="0.5" style="fill:rgb(8, 80, 65);stroke:rgb(93, 202, 165);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="340" y="310" text-anchor="middle" dominant-baseline="central" style="fill:rgb(159, 225, 203);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Backend</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="50" y="330" width="185" height="68" 
rx="8" stroke-width="0.5" style="fill:rgb(8, 80, 65);stroke:rgb(93, 202, 165);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="142" y="352" text-anchor="middle" dominant-baseline="central" style="fill:rgb(159, 225, 203);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">ArchivesSpace API</text> +<text x="142" y="369" text-anchor="middle" dominant-baseline="central" style="fill:rgb(93, 202, 165);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">JRuby · Sinatra</text> +<text x="142" y="385" text-anchor="middle" dominant-baseline="central" style="fill:rgb(93, 202, 165);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">JSONModel</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="435" y="330" width="195" height="68" rx="8" stroke-width="0.5" style="fill:rgb(8, 80, 65);stroke:rgb(93, 202, 165);color:rgb(251, 251, 
254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="532" y="352" text-anchor="middle" dominant-baseline="central" style="fill:rgb(159, 225, 203);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Indexer</text> +<text x="532" y="369" text-anchor="middle" dominant-baseline="central" style="fill:rgb(93, 202, 165);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">JRuby · Sinatra</text> +<text x="532" y="385" text-anchor="middle" dominant-baseline="central" style="fill:rgb(93, 202, 165);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">JSONModel</text> +</g> + +<text x="340" y="346" text-anchor="middle" style="fill:rgb(194, 192, 182);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:auto">monitors updates</text> +<line x1="435" y1="359" x2="235" y2="359" marker-end="url(#arrow)" style="fill:none;stroke:rgb(156, 154, 146);color:rgb(251, 251, 
254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +<rect x="15" y="450" width="650" height="95" rx="12" fill="none" stroke="var(--color-border-secondary)" stroke-width="0.5" stroke-dasharray="6 4" style="fill:none;stroke:rgba(222, 220, 209, 0.3);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-dasharray:6px, 4px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="290" y="439" width="100" height="22" rx="11" stroke-width="0.5" style="fill:rgb(99, 56, 6);stroke:rgb(239, 159, 39);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="340" y="450" text-anchor="middle" dominant-baseline="central" style="fill:rgb(250, 199, 117);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Storage</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, 
BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="50" y="462" width="185" height="58" rx="8" stroke-width="0.5" style="fill:rgb(99, 56, 6);stroke:rgb(239, 159, 39);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="142" y="482" text-anchor="middle" dominant-baseline="central" style="fill:rgb(250, 199, 117);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">MySQL</text> +<text x="142" y="500" text-anchor="middle" dominant-baseline="central" style="fill:rgb(239, 159, 39);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">Primary data store</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="435" y="462" width="195" height="58" rx="8" stroke-width="0.5" style="fill:rgb(99, 56, 6);stroke:rgb(239, 159, 39);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="532" y="482" 
text-anchor="middle" dominant-baseline="central" style="fill:rgb(250, 199, 117);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Apache Solr</text> +<text x="532" y="500" text-anchor="middle" dominant-baseline="central" style="fill:rgb(239, 159, 39);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">Search index · Java</text> +</g> + +<line x1="142" y1="398" x2="142" y2="462" marker-end="url(#arrow)" style="fill:none;stroke:rgb(156, 154, 146);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<line x1="532" y1="398" x2="532" y2="462" marker-end="url(#arrow)" style="fill:none;stroke:rgb(156, 154, 146);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +</svg> \ No newline at end of file diff --git a/src/content/docs/es/architecture/backend.md b/src/content/docs/es/architecture/backend.md new file mode 100644 index 0000000..e44a9ad --- /dev/null +++ b/src/content/docs/es/architecture/backend.md @@ -0,0 +1,422 @@ +--- +title: Backend +description: Describes the architecture behind the backend of ArchivesSpace, including the main.rb and rest.rb files for initiating ArchivesSpace and defining API mechanisms, controllers, 
models, nested records, relationships, agents, validation, optimistic concurrency control, and the permissions model. +--- + +The backend is responsible for implementing the ArchivesSpace API, and +supports the sort of access patterns shown in the previous section. +We've seen that the backend must support CRUD operations against a +number of different record types, and those records are expressed as +JSON documents produced from instances of JSONModel classes. + +The following sections describe how the backend fits together. + +## main.rb -- load and initialize the system + +The `main.rb` program is responsible for starting the ArchivesSpace +system: loading all controllers and models, creating +users/groups/permissions as needed, and preparing the system to handle +requests. + +When the system starts up, the `main.rb` program performs the +following actions: + +- Initializes JSONModel--triggering it to load all record schemas + from the filesystem and generate the classes that represent each + record type. +- Connects to the database +- Loads all backend models--the system's domain objects and + persistence layer +- Loads all controllers--defining the system's REST endpoints +- Starts the job scheduler--handling scheduled tasks such as backups + of the demo database (if used) +- Runs the "bootstrap ACLs" process--creates the admin user and + group if they don't already exist; creates the hidden global + repository; creates system users and groups. +- Fires the "backend started" notification to any registered + observers. + +In addition to handling the system startup, `main.rb` also provides +the following facilities: + +- Session handling--tracks authenticated backend sessions using the + token extracted from the `X-ArchivesSpace-Session` request header. +- Helper methods for accessing the current user and current session + of each request.
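As a rough illustration of the session-handling facility, the self-contained sketch below tracks tokens in memory. The class and method names here are hypothetical stand-ins, not the actual `main.rb` implementation:

```ruby
require 'securerandom'

# Toy sketch of token-based session tracking: issue a token at login,
# then resolve the X-ArchivesSpace-Session request header back to a
# user on later requests. SessionStore is an illustrative name only.
class SessionStore
  def initialize
    @sessions = {}
  end

  # Create a session for an authenticated user and return its token.
  def create(username)
    token = SecureRandom.hex(32)
    @sessions[token] = { username: username, created_at: Time.now }
    token
  end

  # Resolve the token carried in the request headers to the current
  # user; returns nil for unknown or missing tokens.
  def current_user(headers)
    session = @sessions[headers['X-ArchivesSpace-Session']]
    session && session[:username]
  end
end

store = SessionStore.new
token = store.create('admin')
store.current_user('X-ArchivesSpace-Session' => token)  # returns "admin"
```

The real backend additionally persists and expires sessions; this sketch only shows the header-to-user lookup the helper methods rely on.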
+ +## rest.rb -- Request and response handling for REST endpoints + +The `rest.rb` module provides the mechanism used to define the API's +REST endpoints. Each endpoint definition includes: + +- The URI and HTTP request method used to access the endpoint +- A list of typed parameters for that endpoint +- Documentation for the endpoint, each parameter, and each possible + response that may be returned +- Permission checks--predicates that the current user must satisfy + to be able to use the endpoint + +Each controller in the system consists of one or more of these +endpoint definitions. By using the endpoint syntax provided by +`rest.rb`, the controllers can declare the interface they provide, and +are freed of having to perform the sort of boilerplate associated +with request handling--check parameter types, coerce values from +strings into other types, and so on. + +The `main.rb` and `rest.rb` components work together to insulate the +controllers from much of the complexity of request handling. By the +time a request reaches the body of an endpoint: + +- It can be sure that all required parameters are present and of the + correct types. +- The body of the request has been fetched, parsed into the + appropriate type (usually a JSONModel instance--see below) and + made available as a request parameter. +- Any parameters provided by the client that weren't present in the + endpoint definition have been dropped. +- The user's session has been retrieved, and any defined access + control checks have been carried out. +- A connection to the database has been assigned to the request, and + a transaction has been opened. If the controller throws an + exception, the transaction will be automatically rolled back. + +## Controllers + +As touched upon in the previous section, controllers implement the +functionality of the ArchivesSpace API by registering one or more +endpoints. 
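The shape of such an endpoint definition can be pictured with a self-contained Ruby sketch. This is a simplified stand-in for the style of DSL described above--the `Endpoint` class and its methods are illustrative, not the real `rest.rb` syntax:

```ruby
# Simplified sketch of an endpoint-definition DSL: URI, documentation,
# typed parameters, and permission checks, declared in one place.
class Endpoint
  attr_reader :uri, :doc, :params, :permissions

  def self.get(uri, &handler)
    new(uri, &handler)
  end

  def initialize(uri, &handler)
    @uri = uri
    @handler = handler
    @params = []
    @permissions = []
  end

  # Chainable declarations mirroring the elements listed above.
  def description(text)
    @doc = text
    self
  end

  def param(name, type)
    @params << [name, type]
    self
  end

  def permission(name)
    @permissions << name
    self
  end

  # The boilerplate the DSL keeps out of controller bodies: check that
  # declared params are present, coerce them from strings, and drop
  # any undeclared params before the handler runs.
  def call(raw)
    coerced = @params.to_h do |name, type|
      value = raw.fetch(name) { raise ArgumentError, "missing param: #{name}" }
      [name, type == Integer ? Integer(value) : value]
    end
    @handler.call(coerced)
  end
end

endpoint = Endpoint.get('/repositories/:repo_id/accessions/:id') { |p|
  { 'uri' => "/repositories/#{p['repo_id']}/accessions/#{p['id']}" }
}.description('Fetch an accession by ID')
 .param('repo_id', Integer)
 .param('id', Integer)
 .permission(:view_repository)

endpoint.call('repo_id' => '5', 'id' => '2', 'extra' => 'dropped')
# returns {"uri" => "/repositories/5/accessions/2"}
```

Note how the handler body receives only declared, coerced parameters--the same guarantee the real endpoint machinery gives controller code.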
Each endpoint accepts an HTTP request for a given URI, +carries out the request and returns a JSON response (if successful) or +throws an exception (if something goes wrong). + +Each controller lives in its own file, and these can be found in the +`backend/app/controllers` directory. Since most of the request +handling logic is captured by the `rest.rb` module, controllers +generally don't do much more than coordinate the classes from the +model layer and send a response back to the client. + +### crud_helpers.rb -- capturing common CRUD controller actions + +Even though controllers are quite thin, there's still a lot of overlap +in their behaviour. Each record type in the system supports the same +set of CRUD operations, and from the controller's point of view +there's not much difference between an update request for an accession +and an update request for a digital object (for example). + +The `crud_helpers.rb` module pulls this commonality into a set of +helper methods that are invoked by each controller, providing methods +for the standard operations of the system. + +## Models + +The backend's model layer is where the action is. The model layer's +role is to bridge the gap between the high-level JSONModel objects +(complete with their properties, nested records, references to other +records, etc.) and the underlying relational database (via the Sequel +database toolkit). As such, the model layer is mainly concerned with +mapping JSONModel instances to database tables in a way that preserves +everything and allows them to be queried efficiently. + +Each record type has a corresponding model class, but the individual +model definitions are often quite sparse. This is because the +different record types differ in the following ways: + +- The set of properties they allow (and their types, valid values, + etc.)
+- The types of nested records they may contain +- The types of relationships they may have with other record types + +The first of these--the set of allowable properties--is already +captured by the JSONModel schema definitions, so the model layer +doesn't have to enforce these restrictions. Each model can simply +take the values supplied by the JSONModel object it is passed and +assume that everything that needs to be there is there, and that +validation has already happened. + +The remaining two aspects _are_ enforced by the model layer, but +generally don't pertain to just a single record type. For example, an +accession may be linked to zero or more subjects, but so can several +other record types, so it doesn't make sense for the `Accession` model +to contain the logic for handling subjects. + +In practice we tend to see very little functionality that belongs +exclusively to a single record type, and as a result there's not much +to put in each corresponding model. Instead, models are generally +constructed by combining a number of mix-ins (Ruby modules) to satisfy +the requirements of the given record type. Features à la carte! + +### ASModel and other mix-ins + +At a minimum, every model includes the `ASModel` mix-in, which provides +base versions of the following methods: + +- `Model.create_from_json` -- Take a JSONModel instance and create a + model instance (a subclass of Sequel::Model) from it. Returns the + instance. +- `model.update_from_json` -- Update the target model instance with + the values from a given JSONModel instance. +- `Model.sequel_to_json` -- Return a JSONModel instance of the appropriate + type whose values are taken from the target model instance. + Model classes are declared to correspond to a particular JSONModel + instance when created, so this method can automatically return a + JSONModel instance of the appropriate type. 
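As a toy illustration of this three-method contract, the following self-contained sketch swaps the real Sequel-backed persistence for an in-memory hash. All names here are illustrative; the real models subclass Sequel::Model:

```ruby
# In-memory stand-in for the ASModel contract described above:
# create_from_json / update_from_json / sequel_to_json.
class ToyAccession
  @store = {}     # fake database table: id => row hash
  @next_id = 0

  class << self
    # Model.create_from_json -- store the JSON values, return an instance.
    def create_from_json(json)
      id = (@next_id += 1)
      @store[id] = json.merge('id' => id)
      new(id)
    end

    # Model.sequel_to_json -- rebuild a JSON hash from the stored row.
    def sequel_to_json(obj)
      @store[obj.id].dup
    end

    def row(id)
      @store[id]
    end
  end

  attr_reader :id

  def initialize(id)
    @id = id
  end

  # model.update_from_json -- overwrite the stored values in place.
  def update_from_json(json)
    self.class.row(id).merge!(json)
    self
  end
end

acc = ToyAccession.create_from_json('title' => 'Smith Papers')
acc.update_from_json('title' => 'Smith Family Papers')
ToyAccession.sequel_to_json(acc)
# returns {"title" => "Smith Family Papers", "id" => 1}
```

The round trip--JSON in, model instance out, JSON back again--is the essential shape; everything else in the model layer layers behaviour onto these three calls.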
+ +These methods comprise the primary interface of the model layer: +virtually every mix-in in the model layer overrides one or all of +these to add behaviour in a modular way. + +For example, the 'notes' mix-in adds support for multiple notes to be +added to a record type--by mixing this module into a model class, that +class will automatically accept a JSONModel property called 'notes' +that will be stored and retrieved to and from the database as needed. +This works by overriding the three methods as follows: + +- `Model.create_from_json` -- Call 'super' to delegate the creation to + the next mix-in in the chain. When it returns the newly created + object, extract the notes from the JSONModel instance and attach + them to the model instance (saving them in the database). +- `model.update_from_json` -- Call 'super' to save the other updates + to the database, then replace any existing notes entries for the + record with the ones provided by the JSONModel. +- `Model.sequel_to_json` -- Call 'super' to have the next mix-in in + the chain create a JSONModel instance, then pull the stored notes + from the database and poke them into it. + +All of the mix-ins follow this pattern: call 'super' to delegate the +call to the next mix-in in the chain (eventually reaching ASModel), +then manipulate the result to implement the desired behaviour. + +### Nested records + +Some record types, like accessions, digital objects, and subjects, are +_top-level records_, in the sense that they are created independently +of any other record and are addressable via their own URI. However, +there are a number of records that can't exist in isolation, and only +exist in the context of another record. When one record can contain +instances of another record, we call them _nested records_. + +To give an example, the `date` record type is nested within an +`accession` record (among others). 
When the model layer is asked to +save a JSONModel instance containing nested records, it must pluck out +those records, save them in the appropriate database table, and ensure +that linkages are created within the database to allow them to be +retrieved later. + +This happens often enough that it would be tedious to write code for +each model to handle its nested records, so the ASModel mix-in +provides a declaration to handle this automatically. For example, the +`accession` model uses a definition like: + +```ruby +base.def_nested_record(:the_property => :dates, + :contains_records_of_type => :date, + :corresponding_to_association => :date) +``` + +When creating an accession, this declaration instructs the `Accession` +model to create a database record for each date listed in the "dates" +property of the incoming record. Each of these date records will be +automatically linked to the created accession. + +### Relationships + +A relationship is a link between two top-level records, where the link +is a separate, dynamically generated, model with zero or more +properties of its own. + +For example, the `Event` model can be related to several different +types of records: + +```ruby +define_relationship(:name => :event_link, + :json_property => 'linked_records', + :contains_references_to_types => proc {[Accession, Resource, ArchivalObject]}) +``` + +This declaration generates a custom class that models the relationship +between events and the other record types. 
The corresponding JSON +schema declaration for the `linked_records` property looks like this: + +```ruby +"linked_records" => { + "type" => "array", + "ifmissing" => "error", + "minItems" => 1, + "items" => { + "type" => "object", + "subtype" => "ref", + "properties" => { + "role" => { + "type" => "string", + "dynamic_enum" => "linked_event_archival_record_roles", + "ifmissing" => "error", + }, + "ref" => { + "type" => [{"type" => "JSONModel(:accession) uri"}, + {"type" => "JSONModel(:resource) uri"}, + {"type" => "JSONModel(:archival_object) uri"}, + ...], + "ifmissing" => "error" + }, + ... +``` + +That is, the property includes URI references to other records, plus +an additional "role" property to indicate the nature of the +relationship. The corresponding JSON might then be: + +```ruby +linked_records: [{ref: '/repositories/123/accessions/456', role: 'authorizer'}, ...] +``` + +The `define_relationship` definition automatically makes use of the +appropriate join tables in the database to store this relationship and +retrieve it later as needed. + +### Agents and `agent_manager.rb` + +Agents present a bit of a representational challenge. There are four +types of agents (person, family, corporate entity, software), and at a +high-level they are structured in the same way: each type can contain +one or more name records, zero or more contact records, and a number +of properties. Records that link to agents (via a relationship, for +example) can link to any of the four types so, in some sense, each +agent type implements a common `Agent` interface. + +However, the agent types differ in their details. Agents contain name +records, but the types of those name records correspond to the type of +the agent: a person agent contains a person name record, for example. +So, in spite of their similarities, the different agents need to be +modelled as separate record types. + +The `agent_manager` module captures the high-level similarities +between agents. 
Each agent model includes the agent manager mix-in: + +```ruby +include AgentManager::Mixin +``` + +and then defines itself declaratively by the provided class method: + +```ruby +register_agent_type(:jsonmodel => :agent_person, + :name_type => :name_person, + :name_model => NamePerson) +``` + +This definition sets up the properties of that agent. It creates: + +- a `one_to_many` relationship with the corresponding name + type of the agent. +- a `one_to_many` relationship with the `agent_contact` table. +- a nested record definition which defines the names list of the agent + (so the list of names for the agent is automatically stored in + and retrieved from the database). +- a nested record definition for the contact list of the agent. + +## Validations + +As records are added to and updated within the ArchivesSpace system, +they are validated against a number of rules to make sure they are +well-formed and don't conflict with other records. There are two +types of record validation: + +- Record-level validations check that a record is self-consistent: + that it contains all required fields, that its values are of the + appropriate type and format, and that its fields don't contradict + one another. +- System-level validations check that a record makes sense in a + broader context: that it doesn't share a unique identifier with + another record, and that any record it references actually exists. + +Record-level validations can be performed in isolation, while +system-level validations require comparing the record to others in the +database. + +System-level validations need to be implemented in the database itself +(as integrity constraints), but record-level validations are often too +complex to be expressed this way. As a result, validations in +ArchivesSpace can appear in one or both of the following layers: + +- At the JSONModel level, validations are captured by JSON schema + documents.
Where more flexibility is needed, custom validations + are added to the `common/validations.rb` file, allowing validation + logic to be expressed using arbitrary Ruby code. +- At the database level, validations are captured using database + constraints. Since the error messages yielded by these + constraints generally aren't useful for users, database + constraints are also replicated in the backend's model layer using + Sequel validations, which give more targeted error messages. + +As a general rule, record-level validations are handled by the +JSONModel validations (either through the JSON schema or custom +validations), while system-level validations are handled by the model +and the database schema. + +## Optimistic concurrency control + +Updating a record using the ArchivesSpace API is a two-part process: + +```text +# Perform a `GET` against the desired record to fetch its JSON +# representation: + +GET /repositories/5/accessions/2 + +# Manipulate the JSON representation as required, and then `POST` +# it back to replace the original: + +POST /repositories/5/accessions/2 +``` + +If two people do this simultaneously, there's a risk that one person +would silently overwrite the changes made by the other. To prevent +this, every record is marked with a version number that it carries in +the `lock_version` property. When the system receives the updated +copy of a record, it checks that the version it carries is still +current; if the version number doesn't match the one stored in the +database, the update request is rejected and the user must re-fetch +the latest version before applying their update. + +## The ArchivesSpace permissions model + +The ArchivesSpace backend enforces access control, defining which +users are allowed to create, read, update, suppress and delete the +records in the system. The major actors in the permissions model are: + +- Repositories -- The main mechanism for partitioning the + ArchivesSpace system.
For example, an instance might contain one + repository for each section of an organisation, or one repository + for each major collection. +- Users -- An entity that uses the system--often a person, but + perhaps a consumer of the ArchivesSpace API. The set of users is + global to the system, and a single user may have access to + multiple repositories. +- Records -- A unit of information in the system. Some records are + global (existing outside of any given repository), while some are + repository-scoped (belonging to a single repository). +- Groups -- A set of users _within_ a repository. Each group is + assigned zero or more permissions, which it confers upon its + members. +- Permissions -- An action that a user can perform. For example, a + user with the `update_accession_record` permission is allowed to + update accessions for a repository. + +To summarize, a user can perform an action within a repository if they +are a member of a group that has been assigned permission to perform +that action. + +### Conceptual trickery + +Since they're repository-scoped, groups govern access to repositories. +However, there are several record types that exist at the top-level of +the system (such as the repositories themselves, subjects and agents), +and the permissions model must be able to accommodate these. + +To get around this, we invent a concept: the "global" repository +conceptually contains the whole ArchivesSpace universe. As with other +repositories, the global repository contains groups, and users can be +made members of these groups to grant them permissions across the +entire system. One example of this is the "admin" user, which is +granted all permissions by the "administrators" group of the global +repository; another is the "search indexer" user, which can read (but
diff --git a/src/content/docs/es/architecture/database.md b/src/content/docs/es/architecture/database.md new file mode 100644 index 0000000..37609e0 --- /dev/null +++ b/src/content/docs/es/architecture/database.md @@ -0,0 +1,554 @@ +--- +title: Database +description: Describes the structure of the ArchivesSpace database, including a breakdown between the main, supporting, subrecord, relationship, enumerations, user-setting-permissions, job, and system tables. It also breaks down the specific fields present in the different tables. +--- + +The ArchivesSpace database stores all data that is created within an ArchivesSpace instance. As described in other sections of this documentation, the backend code - particularly the model layer and `ASModel_crud.rb` file - uses the `Sequel` database toolkit to bridge the gap between this underlying data and the JSON objects which are exchanged by the other components of the system. + +Often, querying the database directly is the most efficient and powerful way to retrieve data from ArchivesSpace. It is also possible to use raw SQL queries to create custom reports that can be run by users in the staff interface. Please consult the [Custom Reports](/customization/reports) section of this documentation for additional information on creating custom reports. + +<!-- .See this [plugin](link-to-plugin) for an example. Also --> + +It is recommended that ArchivesSpace be run against MySQL in production, not the included demo database. Instructions on setting up ArchivesSpace to run against MySQL are [here](/provisioning/mysql). + +The examples in this section are written for MySQL. There are many freely-available tutorials on the internet which can provide guidance to those unfamiliar with MySQL query syntax and the features of the language. + +**NOTE**: the documentation below is current through database schema version 129, application version 2.7.1. 
+ +## Database Overview + +The ArchivesSpace database schema and its mapping to the JSONModel objects used by the other parts of the system are defined by the files in the `common/schemas` and `backend/models` directories. The database itself is created via the `setup-database` script in the `scripts` directory. This script runs the migrations in the `common/db/migrations` directory. + +The tables in the ArchivesSpace database can be grouped into several general categories: + +- [Main record tables](#main-record-tables) +- [Supporting record tables](#supporting-record-tables) +- [Subrecord tables](#subrecord-tables) +- [Relationship tables](#relationship-tables) +- [Enumerations](#enumerations) +- [User, setting, and permission tables](#user-setting-and-permission-tables) +- [Job tables](#job-tables) +- [System tables](#system-tables) +- [Parent-Child Relationships and Sequencing](#parent-child-relationships-and-sequencing) + - [Repository-scoped records](#repository-scoped-records) + - [Parent/child relationships](#parentchild-relationships) + - [Sequencing](#sequencing) +- [Boolean fields](#boolean-fields) +- [Read-Only Fields](#read-only-fields) + +One way to get a view of all tables and columns in your ArchivesSpace instance is to run the following query in a MySQL client: + +```sql +SELECT TABLE_SCHEMA + , TABLE_NAME + , COLUMN_NAME + , ORDINAL_POSITION + , IS_NULLABLE + , COLUMN_TYPE + , COLUMN_KEY +FROM INFORMATION_SCHEMA.COLUMNS +#change the following value to whatever your database is named +WHERE TABLE_SCHEMA LIKE 'archivesspace' +``` + +Additionally, a BETA version of an [ArchivesSpace data dictionary](https://github.com/archivesspace/data-dictionary-initial) has been created by members of the ArchivesSpace development team and the ArchivesSpace User Advisory Council Reports team. + +## Main record tables + +These tables hold data about the primary record types in ArchivesSpace.
Main record types are distinguished from subrecords in that they have their own persistent URIs - corresponding to their database identifiers/primary keys - that are resolvable via the staff interface, public interface, and API. They are distinguished from supporting records in that they are the primary descriptive record types that users will interact with in the system. + +All of these records, except archival objects, can be created independently of any other record. Archival object records represent components of a larger entity, and so they must have a resource record as a root parent. See the [parent/child relationships](#parent-child-relationships-and-sequencing) section for more information about the representation of hierarchical relationships in the database. + +A few common fields occur in several main record tables. These similar fields are defined by the parent schemas in the `common/schemas` directory: + +| Column Name | Tables | +| ----------------------------------------------- | ---------------------------------------------------------------------------------------- | +| `title` | `accession`, `archival_object`, `digital_object`, `digital_object_component`, `resource` | +| `identifier`/`component_id`/`digital_object_id` | `accession`, `resource`/`archival_object`, `digital_object_component`/`digital_object` | +| `other_level` | `archival_object`, `resource` | +| `repository_processing_note` | `archival_object`, `resource` | + +<!-- Booleans --> + +All of the main records have a set of fields which store boolean values (`0` or `1`) that indicate whether the records are published in the public user interface, suppressed in the staff interface, or have some kind of applicable restriction. The exception to this is the `repository` table, which does not have a restriction boolean, but does have a `hidden` boolean. The `accession` table has multiple restriction-related booleans. See the section below for more information about boolean fields. 
+ +Beginning in version 2.6.0, the main record tables (and some supporting records - see below) also contain fields which hold data about archival resource keys (ARKs) and human-readable URLs (slugs): + +| Column Name | Tables | +| ------------------ | ------------------------------------------------------------------------------------------------------ | +| `slug` | `accession`, `archival_object`, `digital_object`, `digital_object_component`, `repository`, `resource` | +| `external_ark_url` | `archival_object`, `resource` | + +Also stored in these and all other tables are enumeration values, foreign keys which correspond to database identifiers in the `enumeration_value` table, which stores controlled values. See enumeration section below for more detail. + +All subrecord data types - i.e. dates, extents, instances - relating to a main or supporting record are stored in their own tables and linked to main or supporting records via foreign key references in the subrecord tables. See subrecord section below for more detail. 
+ +The remaining data in the main record tables is text, and is unique to each table: + +| TABLE_NAME | COLUMN_NAME | IS_NULLABLE | COLUMN_TYPE | COLUMN_KEY | +| -------------------------- | ------------------------------- | ----------- | ------------ | ---------- | +| `accession` | `content_description` | YES | text | | +| `accession` | `condition_description` | YES | text | | +| `accession` | `disposition` | YES | text | | +| `accession` | `inventory` | YES | text | | +| `accession` | `provenance` | YES | text | | +| `accession` | `general_note` | YES | text | | +| `accession` | `accession_date` | YES | date | | +| `accession` | `retention_rule` | YES | text | | +| `accession` | `access_restrictions_note` | YES | text | | +| `accession` | `use_restrictions_note` | YES | text | | +| `archival_object` | `ref_id` | NO | varchar(255) | MUL | +| `digital_object_component` | `label` | YES | varchar(255) | | +| `repository` | `repo_code` | NO | varchar(255) | UNI | +| `repository` | `name` | NO | varchar(255) | | +| `repository` | `org_code` | YES | varchar(255) | | +| `repository` | `parent_institution_name` | YES | varchar(255) | | +| `repository` | `url` | YES | varchar(255) | | +| `repository` | `image_url` | YES | varchar(255) | | +| `repository` | `contact_persons` | YES | text | | +| `repository` | `description` | YES | text | | +| `repository` | `oai_is_disabled` | YES | int | | +| `repository` | `oai_sets_available` | YES | text | | +| `resource` | `ead_id` | YES | varchar(255) | | +| `resource` | `ead_location` | YES | varchar(255) | | +| `resource` | `finding_aid_title` | YES | text | | +| `resource` | `finding_aid_filing_title` | YES | text | | +| `resource` | `finding_aid_date` | YES | varchar(255) | | +| `resource` | `finding_aid_author` | YES | text | | +| `resource` | `finding_aid_language_note` | YES | varchar(255) | | +| `resource` | `finding_aid_sponsor` | YES | text | | +| `resource` | `finding_aid_edition_statement` | YES | text | | +| `resource` | 
`finding_aid_series_statement` | YES | text | | +| `resource` | `finding_aid_note` | YES | text | | +| `resource` | `finding_aid_subtitle` | YES | text | | + +<!-- arguably top contsainers should be here, or digital objects should be in the supporting records --> + +## Supporting record tables + +Like the main record types listed above, supporting records can also be created independently of other records, and are addressable in the staff interface and API via their own URI. However, they are primarily meaningful via their many-to-many linkages to the main record types (and, sometimes, other supporting record types). These records typically provide additional information about, or otherwise enhance, the primary record types. A few supporting record types - for instance those in the `term` table - are used to enhance other supporting record types. + +| Supporting module tables | Linked to | +| --------------------------------- | --------------------------------------------------- | +| `agent_corporate_entity` | +| `agent_family` | +| `agent_person` | +| `agent_software` | +| `assessment` | +| `classification` | `accession`, `resource` | +| `classification_term` | `classification`, `accession`, `resource` | +| `container_profile` | `top_container` | +| `event` | +| `location` | +| `location_profile` | `location` | +| `subject` | `resource`, `archival_object` | +| `term` | `subject` | +| `top_container` | +| `vocabulary` | `subject`, `term` | +| `assessment_attribute_definition` | `assessment_attribute`, `assessment_attribute_note` | + +<!-- is this the appropriate place for the assessment attribute def? Vocabulary? --> + +## Subrecord tables + +<!-- link to ### Nested records section of the backend readme --> + +Subrecords must be associated with a main or supporting record - they cannot be created independently. As such, they do not have their own URIs, and can only be accessed via the API by retrieving the top-level record with which they are associated. 
In the staff interface these records are embedded within main or supporting record views. In the API subrecord data is contained in arrays within main or supporting records. + +The various subrecord types do have their own database tables. In addition to data specific to the subrecord type, the tables also contain foreign key columns which hold the database identifiers of main or supporting records. Subrecord tables must have a value in one of the foreign key fields. Some subrecords can have another subrecord as parent (for instance, the `sub_container` subrecord has `instance_id` as its foreign key column). + +Subrecords exist in a one-to-many relationship with their parent records, so a record's `id` may appear multiple times in a subrecord table (i.e. when there are two dates associated with a resource record). + +It is important to note that subrecords are deleted and recreated upon each save of the main or supporting record with which they are associated, regardless of whether the subrecord itself is modified. This means that the database identifier is deleted and reassigned upon each save. 
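As a concrete illustration of the one-to-many shape, the following sketch finds resources that carry more than one date subrecord; the same parent identifier simply appears on multiple rows of the `date` table (the `resource_id` foreign key is listed in the table below):

```sql
#Resources with more than one date subrecord attached.
SELECT date.resource_id
     , COUNT(*) AS date_count
FROM date
WHERE date.resource_id IS NOT NULL
GROUP BY date.resource_id
HAVING COUNT(*) > 1
```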
+ +| Subrecord tables | Foreign keys | +| --------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `agent_contact` | `agent_person_id`, `agent_family_id`, `agent_corporate_entity_id`, `agent_software_id` | +| `date` | `accession_id`, `deaccession_id`, `archival_object_id`, `resource_id`, `event_id`, `digital_object_id`, `digital_object_component_id`, `related_agents_rlshp_id`, `agent_person_id`, `agent_family_id`, `agent_corporate_entity_id`, `agent_software_id`, `name_person_id`, `name_family_id`, `name_corporate_entity_id`, `name_software_id` | +| `extent` | `accession_id`, `deaccession_id`, `archival_object_id`, `resource_id`, `digital_object_id`, `digital_object_component_id` | +| `external_document` | `accession_id`, `archival_object_id`, `resource_id`, `subject_id`, `agent_person_id`, `agent_family_id`, `agent_corporate_entity_id`, `agent_software_id`, `rights_statement_id`, `digital_object_id`, `digital_object_component_id`, `event_id` | +| `external_id` | `subject_id`, `accession_id`, `archival_object_id`, `collection_management_id`, `digital_object_id`, `digital_object_component_id`, `event_id`, `location_id`, `resource_id` | +| `file_version` | `digital_object_id`, `digital_object_component_id` | +| `instance` | `resource_id`, `archival_object_id`, `accession_id` | +| `name_authority_id` | `name_person_id`, `name_family_id`, `name_software_id`, `name_corporate_entity_id` | +| `name_corporate_entity` | `agent_corporate_entity_id` | +| `name_family` | `agent_family_id` | +| `name_person` | `agent_person_id` | +| `name_software` | `agent_software_id` | +| `note` | `resource_id`, `archival_object_id`, `digital_object_id`, `digital_object_component_id`, 
`agent_person_id`, `agent_corporate_entity_id`, `agent_family_id`, `agent_software_id`, `rights_statement_act_id`, `rights_statement_id` | +| `note_persistent_id` | `note_id`, `parent_id` | +| `revision_statement` | `resource_id` | +| `rights_restriction` | `resource_id`, `archival_object_id` | +| `rights_restriction_type` | `rights_restriction_id` | +| `rights_statement` | `accession_id`, `archival_object_id`, `resource_id`, `digital_object_id`, `digital_object_component_id`, `repo_id` | +| `rights_statement_act` | `rights_statement_id` | +| `sub_container` | `instance_id` | +| `telephone` | `agent_contact_id` | +| `user_defined` | `accession_id`, `resource_id`, `digital_object_id` | +| `ark_name` | `archival_object_id`, `resource_id` | +| `assessment_attribute_note` | `assessment_id` | +| `assessment_attribute` | `assessment_id` | +| `lang_material` | `archival_object_id`, `resource_id`, `digital_object_id`, `digital_object_component_id` | +| `language_and_script` | `lang_material_id` | +| `collection_management` | `accession_id`, `resource_id`, `digital_object_id` | +| `location_function` | `location_id` | + +<!-- appropriate place for collection management and deaccession stuff? what about location function? all the rights statement stuff? Is there a specific thing that defines a subrecord as a subrecord? --> + +## Relationship tables + +These tables exist to enable linking between main records and supporting records. Relationship tables are necessary because, unlike subrecord tables, supporting record tables do not include foreign keys which link them to the main record tables. + +Most relationship tables have the `_rlshp` suffix in their names. They typically contain just the primary keys for the tables that are being linked, though a few tables also include fields that are specific to the relationship between the two record types. 
+ +| Relationship/linking tables | Tables linked | +| ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `assessment_reviewer_rlshp` | `assessment` to `agent_person` | +| `assessment_rlshp` | `assessment` to `accession`, `archival_object`, `resource`, or `digital_object` | +| `classification_creator_rlshp` | `classification` to `agent_person`, `agent_family`, `agent_corporate_entity`, or `agent_software` | +| `classification_rlshp` | `classification` or `classification_term` to `resource` or `accession` | +| `classification_term_creator_rlshp` | `classification_term` to `agent_person`, `agent_family`, `agent_corporate_entity`, or `agent_software` | +| `event_link_rlshp` | `event` to `accession`, `resource`, `archival_object`, `digital_object`, `digital_object_component`, `agent_person`, `agent_family`, `agent_corporate_entity`, `agent_software`, or `top_container`. Also includes the `role_id` table, which can be joined with the `enumeration_value` table to return the event role (source, outcome, transfer, context) | +| `instance_do_link_rlshp` | `digital_object` to `instance` | +| `linked_agents_rlshp` | `agent_person`, `agent_software`, `agent_family`, or `agent_corporate_entity` to `accession`, `archival_object`, `digital_object`, `digital_object_component`, `event`, or `resource`. Also includes the `role_id` and `relator_id` tables, which can be joined with the `enumeration_value` table | +| `location_profile_rlshp` | `location` to `location_profile` | +| `owner_repo_rlshp` | `location` to `repository` | +| `related_accession_rlshp` | Links a row in the `accession` table to another row in the `accession` table. 
Also includes fields for `relator` and relationship type. | +| `related_agents_rlshp` | `agent_person`, `agent_corporate_entity`, `agent_software`, or `agent_family` to other agent tables, or two rows in the same agent table. Also includes fields for `relator` and `description`, and the type of relationship. | +| `spawned_rlshp` | `accession` to `resource`. This contains all linked accession data, even if the resource was not spawned from the accession record. | +| `subject_rlshp` | `subject` to `accession`, `archival_object`, `resource`, `digital_object`, or `digital_object_component` | +| `surveyed_by_rlshp` | `assessment` to `agent_person` | +| `top_container_housed_at_rlshp` | `top_container` to `location`. Also includes fields for `start_date`, `end_date`, `status`, and a free-text `note`. | +| `top_container_link_rlshp` | `top_container` to `sub_container` | +| `top_container_profile_rlshp` | `top_container` to `container_profile` | +| `subject_term` | `subject` to `term` | +| `linked_agent_term` | `linked_agents_rlshp` to `term` | + +<!-- is the assessment definition thing a linking table - it pretty much only has foreign keys + +Same question about one of the rights restriction tables - can't remember which one right now. + --> + +It is not always obvious which relationship tables will provide the desired results. 
For instance, to get a box list for a given resource record, enter the following query into a MySQL editor: + +```sql +SELECT DISTINCT CONCAT('/repositories/', resource.repo_id, '/resources/', resource.id) as resource_uri + , resource.identifier + , resource.title + , tc.barcode as barcode + , tc.indicator as box_number +FROM sub_container sc +JOIN top_container_link_rlshp tclr on tclr.sub_container_id = sc.id +JOIN top_container tc on tclr.top_container_id = tc.id +JOIN instance on sc.instance_id = instance.id +JOIN archival_object ao on instance.archival_object_id = ao.id +JOIN resource on ao.root_record_id = resource.id +#change to your desired resource id +WHERE resource.id = 4556 +``` + +Sometimes numerous relationship tables must be joined to retrieve the desired results. For instance, to get all boxes and folders for a given resource record, including any container profiles and locations, enter the following query into a MySQL editor: + +```sql +SELECT CONCAT('/repositories/', tc.repo_id, '/top_containers/', tc.id) as tc_uri + , CONCAT('/repositories/', resource.repo_id, '/resources/', resource.id) as resource_uri + , CONCAT('/repositories/', resource.repo_id) as repo_uri + , CONCAT('/repositories/', ao.repo_id, '/archival_objects/', ao.id) as ao_uri + , resource.identifier AS resource_identifier + , resource.title AS resource_title + , ao.display_string AS ao_title + , ev2.value AS level + , tc.barcode AS barcode + , cp.name AS container_profile + , tc.indicator AS container_num + , ev.value AS sc_type + , sc.indicator_2 AS sc_num +from sub_container sc +JOIN top_container_link_rlshp tclr on tclr.sub_container_id = sc.id +JOIN top_container tc on tclr.top_container_id = tc.id +LEFT JOIN top_container_profile_rlshp tcpr on tcpr.top_container_id = tc.id +LEFT JOIN container_profile cp on cp.id = tcpr.container_profile_id +LEFT JOIN top_container_housed_at_rlshp tchar on tchar.top_container_id = tc.id +JOIN instance on sc.instance_id = instance.id +JOIN 
archival_object ao on instance.archival_object_id = ao.id +JOIN resource on ao.root_record_id = resource.id +LEFT JOIN enumeration_value ev on ev.id = sc.type_2_id +LEFT JOIN enumeration_value ev2 on ev2.id = ao.level_id +#change to your desired resource id +WHERE resource.id = 4223 + +``` + + <!-- Mention the CONCAT function for creating URIs --> + +## Enumerations + +All controlled values used by the application - excluding tool-tips and frontend/public display values and the values that are stored in a few of the supporting record tables (see below) - are stored in a table called `enumeration_value`. Controlled values are organized into a variety of parent enumerations (akin to a set of distinct controlled value lists) which are utilized by different record and subrecord types. Parent enumeration data is stored in the `enumeration` table and is linked by foreign key in the `enumeration_id` field in the `enumeration_value` table. In the record and subrecord tables, enumeration values appear as foreign keys in a variety of foreign key columns, usually identified by an `_id` suffix. + +ArchivesSpace comes with a standard set of controlled values, but most of these are modifiable by end-users via the staff interface and API. However, some values in the `enumeration_value` table are read-only - these values define the terminology and data types used in different parts of the application (i.e. the various note types).
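To see every controlled value that belongs to a single list, join the two tables on the parent enumeration. This sketch assumes the name column on the `enumeration` table is called `name`, and uses `date_type` (one of the enumerations shown in the table below) as an example:

```sql
#All controlled values in the date_type list, in their defined order.
SELECT ev.id
     , ev.value
     , ev.position
FROM enumeration_value ev
JOIN enumeration e ON ev.enumeration_id = e.id
#the `name` column is an assumption - verify it against your schema
WHERE e.name = 'date_type'
ORDER BY ev.position
```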
+ +Enumeration IDs appear as foreign keys in a variety of database tables: + +| table_name | column_name | enumeration_name | +| -------------------------- | ---------------------------------- | -------------------------------------------------- | +| `accession` | `acquisition_type_id` | accession_acquisition_type | +| `accession` | `resource_type_id` | accession_resource_type | +| `agent_contact` | `salutation_id` | agent_contact_salutation | +| `archival_object` | `level_id` | archival_record_level | +| `collection_management` | `processing_priority_id` | collection_management_processing_priority | +| `collection_management` | `processing_status_id` | collection_management_processing_status | +| `collection_management` | `processing_total_extent_type_id` | extent_extent_type_id | +| `container_profile` | `dimension_units_id` | dimension_units | +| `date` | `calendar_id` | date_calendar | +| `date` | `certainty_id` | date_certainty | +| `date` | `date_type_id` | date_type | +| `date` | `era_id` | date_era | +| `date` | `label_id` | date_label | +| `deaccession` | `scope_id` | deaccession_scope | +| `digital_object` | `digital_oject_type_id` | digital_object_digital_object_type | +| `digital_object` | `level_id` | digital_object_level | +| `event` | `event_type_id` | event_event_type | +| `event` | `outcome_id` | event_outcome | +| `extent` | `extent_type_id` | extent_extent_type | +| `extent` | `portion_id` | extent_portion | +| `external_document` | `identifier_type_id` | rights_statement_external_document_identifier_type | +| `file_version` | `checksum_method_id` | file_version_checksum_methods | +| `file_version` | `file_format_name_id` | file_version_file_format_name | +| `file_version` | `use_statement_id` | file_version_use_statement | +| `file_version` | `xlink_actuate_attribute_id` | file_version_xlink_actuate_attribute | +| `file_version` | `xlink_show_attribute_id` | file_version_xlink_show_attribute | +| `instance` | `instance_type_id` | 
instance_instance_type | +| `language_and_script` | `language_id` | +| `language_and_script` | `script_id` | +| `location` | `temporary_id` | location_temporary | +| `location_function` | `location_function_type_id` | location_function_type | +| `location_profile` | `dimension_units_id` | dimension_units | +| `name_corporate_entity` | `rules_id` | name_rule | +| `name_corporate_entity` | `source_id` | name_source | +| `name_family` | `rules_id` | name_rule | +| `name_family` | `source_id` | name_source | +| `name_person` | `name_order_id` | name_person_name_order | +| `name_person` | `rules_id` | name_rule | +| `name_person` | `source_id` | name_source | +| `name_software` | `rules_id` | name_rule | +| `name_software` | `source_id` | name_source | +| `repository` | `country_id` | country_iso_3166 | +| `resource` | `finding_aid_description_rules_id` | resource_finding_aid_description_rules | +| `resource` | `finding_aid_language_id` | +| `resource` | `finding_aid_script_id` | +| `resource` | `finding_aid_status_id` | resource_finding_aid_status | +| `resource` | `level_id` | archival_record_level | +| `resource` | `resource_type_id` | resource_resource_type | +| `rights_restriction_type` | `restriction_type_id` | restriction_type | +| `rights_statement` | `jurisdiction_id` | +| `rights_statement` | `other_rights_basis_id` | rights_statement_other_rights_basis | +| `rights_statement` | `rights_type_id` | rights_statement_rights_type | +| `rights_statement` | `status_id` | +| `rights_statement_act` | `act_type_id` | rights_statement_act_type | +| `rights_statement_act` | `restriction_id` | rights_statement_act_restriction | +| `rights_statement_pre_088` | `ip_status_id` | rights_statement_ip_status | +| `rights_statement_pre_088` | `jurisdiction_id` | +| `rights_statement_pre_088` | `rights_type_id` | rights_statement_rights_type | +| `sub_container` | `type_2_id` | container_type | +| `sub_container` | `type_3_id` | container_type | +| `subject` | `source_id` | 
subject_source | +| `telephone` | `number_type_id` | telephone_number_type | +| `term` | `term_type_id` | subject_term_type | +| `top_container` | `type_id` | container_type | + +<!-- need to add some rlshp tables which have enums --> + +To translate the enumeration ID that appears in the record and subrecord tables, join the `enumeration_value` table. The table can be joined multiple times if there are multiple values to translate, but you must use an alias for each table. For example: + +```sql +SELECT CONCAT('/repositories/', ao.repo_id, '/archival_objects/', ao.id) as ao_uri + , ao.display_string as ao_title + , date.begin + , date.end + , ev.value as date_label + , ev2.value as date_type + , ev3.value as date_calendar +FROM archival_object ao +LEFT JOIN date on date.archival_object_id = ao.id +LEFT JOIN enumeration_value ev on ev.id = date.label_id +LEFT JOIN enumeration_value ev2 on ev2.id = date.date_type_id +LEFT JOIN enumeration_value ev3 on ev3.id = date.calendar_id +``` + +**NOTE**: `container_profile`, `location_profile`, and `assessment_attribute_definition` records are similar to the records in the `enumeration_value` table in that they store controlled values which are referenced by other parts of the system. However, they differ in that they have their own tables and are addressable via their own URIs. + +## User, setting, and permission tables + +These tables store user and permissions information, user/repository/global preferences, and RDE and custom report templates. 
+ +| Table name | Description | +| ------------------------ | ------------------------------------------------------- | +| `custom_report_template` | Custom report templates | +| `default_values` | Default values settings | +| `group` | Data about permission groups created by each repository | +| `group_permission` | Links the permission table to the group table | +| `group_user` | Links the group table to the user table | +| `oai_config` | Configuration data for OAI-PMH harvesting | +| `permission` | All permission types that can be assigned to users | +| `preference` | User preference data | +| `rde_template` | RDE templates | +| `required_fields` | Contains repository-defined required fields | +| `user` | User data | + +## Job tables + +These tables store data related to background jobs, including imports. + +| Table name | Description | +| --------------------- | ---------------------------------------------------------- | +| `job` | All jobs which have been run in an ArchivesSpace instance. | +| `job_created_record` | Records created via background jobs | +| `job_input_file` | Data about input files used in background jobs | +| `job_modified_record` | Data about records modified via background jobs | + +## System tables + +These tables track actions taken against the database (i.e. edits and deletes), system events, session and authorization data, and database information. These tables are typically not referenced by any other table. + +| Table name | Description | +| ----------------- | --------------------------------------------------------------------------------------------------- | +| `active_edit` | Records being actively edited by a user. Read-only system table | +| `auth_db` | Authentication data for users. Read-only system table | +| `deleted_records` | Records deleted in the past 24 hours. Read-only system table | +| `notification` | Notifications stream. Read-only system table | +| `schema_info` | Contains the database schema version. 
Read-only system table. | +| `sequence` | The value corresponds to the number of children the archival object has - 1. Read-only system table | +| `session` | Recent session data. Read-only system table | +| `system_event` | System event data. Read-only system table | + +<!-- these are subrecords --> +<!-- | subnote_metadata | +| rights_statement_pre_088 | --> + +## Parent-Child Relationships and Sequencing + +### Repository-scoped records + +Many main and supporting records are scoped to a particular repository. In these tables the parent repository is identified by a foreign key which corresponds to the database identifier in the `repository` table: + +| Column name | Description | Example | Found in | +| ----------- | ---------------------------------------- | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `repo_id` | The database ID of the parent repository | `12` | `accession`, `archival_object`, `assessment`, `assessment_attribute_definition`, `classification`, `classification_term`, `custom_report_template`, `default_values`, `digital_object`, `digital_object_component`, `event`, `group`, `job`, `preference`, `required_fields`, `resource`, `rights_statement`, `top_container` | + +### Parent/child relationships + +Hierarchical relationships between other records are also expressed through foreign keys: + +| Column name | Description | Example | PK Tables | Found in | +| ---------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------- | 
-------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------ | +| `root_record_id` | The database ID of the root parent record | `4566` | `resource`, `digital_object`, `classification` | `archival_object`, `digital_object_component`, `classification_term` | +| `parent_id` | The database ID of the immediate parent record. This is used to identify parent records which are of the same type as the child record (i.e. two archival object records). The value will be NULL if the only parent is the root record. | `1748121` | `archival_object`, `classification_term`, `digital_object_component` | `archival_object`, `classification_term`, `digital_object_component`, `note_persistent_id` | +| `parent_name` | The database ID or URI, and the record type of the immediate parent | `144@archival_object`, `root@/repositories/2/resources/2` | `resource`, `archival_object`, `classification`, `classification_term`, `digital_object`, `digital_object_component` | `archival_object`, `classification_term`, `digital_object_component` | + +Beginning with MySQL 8, you can recursively retrieve all parents of an archival object (or all archival objects linked to a resource) by running the following query: + +```sql +WITH RECURSIVE ao_path AS + (SELECT ao1.id + , ao1.display_string + , ao1.component_id + , ao1.parent_id + , ev.value as `ao_level` + , 1 as level + FROM archival_object ao1 + LEFT JOIN enumeration_value ev on ev.id = ao1.level_id + WHERE ao1.id = <your ao id> + #to get all trees for a resource change to: WHERE ao1.root_record_id = <your root_record_id> + UNION ALL + SELECT ao2.id + , ao2.display_string + , ao2.component_id + , ao2.parent_id + , ev.value as `ao_level` + , ao_path.level + 1 as level + FROM ao_path + JOIN archival_object ao2 on ao_path.parent_id = ao2.id + LEFT JOIN enumeration_value ev on ev.id = ao2.level_id) + 
SELECT GROUP_CONCAT(CONCAT(display_string, ' (', CONCAT(UPPER(SUBSTRING(ao_level,1,1)),LOWER(SUBSTRING(ao_level,2))), ' ', IF(component_id is not NULL, CAST(component_id as CHAR), "N/A"), ')') ORDER BY level DESC SEPARATOR ' > ') as tree
+  FROM ao_path;
+```
+
+To retrieve all children (MySQL 8+):
+
+To retrieve both parents and children (MySQL 8+):
+
+To retrieve all parents of a record in MySQL 5.7 and below, run the following query:
+
+```sql
+SELECT (SELECT GROUP_CONCAT(CONCAT(display_string, ' (', ao_level, ')') SEPARATOR ' < ') as parent_path
+        FROM (SELECT T2.display_string as display_string
+                   , ev.value as ao_level
+              FROM (SELECT @r AS _id
+                         , @p := @r AS previous
+                         , (SELECT @r := parent_id FROM archival_object WHERE id = _id) AS parent_id
+                         , @l := @l + 1 AS lvl
+                    FROM ((SELECT @r := 1749840, @p := 0, @l := 0) AS vars,
+                         archival_object h)
+                    WHERE @r <> 0 AND @r <> @p) AS T1
+              JOIN archival_object T2 ON T1._id = T2.id
+              LEFT JOIN enumeration_value ev on ev.id = T2.level_id
+              WHERE T2.id != 1749840
+              ORDER BY T1.lvl DESC) as all_parents) as p_path
+     , ao.display_string
+     , CONCAT('/repositories/', ao.repo_id, '/archival_objects/', ao.id) as uri
+FROM archival_object ao
+WHERE ao.id = 1749840
+```
+
+To retrieve all children of a record (MySQL 5.7 and below):
+
+```sql
+
+```
+
+### Sequencing
+
+The ordering of records in a `resource`, `classification`, or `digital_object` tree is determined by the `position` field. 
The position field is also used to order values in the `enumeration_value` and `assessment_attribute_definition` tables:
+
+| Column name | Description | Example | Found in |
+| ----------- | ------------------------------------------------------------ | -------- | --- |
+| `position` | The position of the record under its immediate parent | `168000` | `enumeration_value`, `assessment_attribute_definition`, `classification_term`, `digital_object_component`, `archival_object` |
+
+## Boolean fields
+
+Many records and subrecords include fields which contain integers (`0` or `1`) corresponding to boolean values.
+
+| Boolean fields | Description | Found in |
+| -------------------- | ----------- | --- |
+| `publish` | Whether the record or subrecord is published (visible in the public interface) | `subnote_metadata`, `file_version`, `external_document`, `accession`, `classification`, `agent_person`, `agent_family`, `agent_software`, `agent_corporate_entity`, `classification_term`, `revision_statement`, `repository`, `note`, `digital_object`, `digital_object_component`, `archival_object`, `resource` |
+| `suppressed` | Whether the record is suppressed (hidden from users without permission to view suppressed records) | `accession`, `archival_object`, `assessment_reviewer_rlshp`, `assessment_rlshp`, `classification`, `classification_creator_rlshp`, `classification_rlshp`, `classification_term`, `classification_term_creator_rlshp`, 
`digital_object`, `digital_object_component`, `enumeration_value`, `event`, `event_link_rlshp`, `instance_do_link_rlshp`, `linked_agents_rlshp`, `location_profile_rlshp`, `owner_repo_rlshp`, `related_accession_rlshp`, `related_agents_rlshp`, `resource`, `spawned_rlshp`, `surveyed_by_rlshp`, `top_container_housed_at_rlshp`, `top_container_link_rlshp`, `top_container_profile_rlshp` |
+| `restrictions_apply` | Whether access restrictions apply to the described materials | `accession`, `archival_object` |
+
+<!-- NEED TO ADD the restriction field here - the resource and dig ob recs have it -->
+<!-- also add the hidden field in repo and the multiple restrictions in accession -->
+<!-- I think this is good to mention because these are editable via the API but also have their own endpoints. So they are a little different. Should also mention that they are bools in the API docs. -->
+
+## Read-Only Fields
+
+Several system-generated, read-only fields appear across many tables. These include database identifiers, timestamps that track record creation and modification, and fields that record the username of the user that created and last modified each record.
+
+| Most common read-only fields | Description |
+| ------------------------------ | --- |
+| `id` (primary key) | Database identifier for each record |
+| `system_mtime` | The last time the record was modified by the system |
+| `created_by` | The user that created a record |
+| `last_modified_by` | The user that last modified a record |
+| `user_mtime` | The time that a record was last modified by a user |
+| `create_time` | The time that a record was created |
+| `lock_version` | This field is incremented each time a record is updated. This provides a method of tracking updates and managing near-simultaneous edits by different users. 
|
+| `json_schema_version` | The JSON schema version |
+| `aspace_relationship_position` | The position of a linked record in a list of other linked records |
+| `is_slug_auto` | A boolean value that indicates whether a slug was auto-generated |
+| `system_generated` | A boolean value that indicates whether a field was system-generated |
+| `display_string` | A system-generated field which concatenates the title and date fields of an archival object record |
+
+**NOTE**: For subrecord tables these fields may hold unexpected data. Because subrecords are deleted and recreated upon each save of a main or supporting record, their create and modification times are also recreated and will not reflect the original creation date of the subrecord itself. For resource records, the timestamp only records the time that the resource itself was modified, not the last time any of its components were modified.
+
+<!-- ## Querying the ArchivesSpace Database -->
diff --git a/src/content/docs/es/architecture/directories.md b/src/content/docs/es/architecture/directories.md
new file mode 100644
index 0000000..8d1c026
--- /dev/null
+++ b/src/content/docs/es/architecture/directories.md
@@ -0,0 +1,90 @@
+---
+title: Directory structure
+description: Provides short summaries of the different directories in the ArchivesSpace codebase.
+---
+
+ArchivesSpace is made up of several components that are kept in separate directories.
+
+## \_yard
+
+This directory contains the code for the documentation tool used to generate the GitHub Pages site at http://archivesspace.github.io/archivesspace/
+
+## backend
+
+This directory contains the code that handles the database and the API.
+
+## build
+
+This directory contains the code used to build the application. It includes the commands used to run the development servers and test suites, and to build releases. ArchivesSpace is a JRuby application and Apache Ant is used to build it. 
+
+## clustering
+
+This directory contains code that can be used when clustering an ArchivesSpace installation.
+
+## common
+
+This directory contains code that is used across two or more of the components. It includes configuration options, database schemas and migrations, and translation files.
+
+## contribution_files
+
+This directory contains the documentation and PDFs of the license agreement files.
+
+## docs
+
+This directory contains documentation files that are included in a release.
+
+## frontend
+
+This directory contains the staff interface Ruby on Rails application.
+
+## indexer
+
+This directory contains the indexer Sinatra application.
+
+## jmeter
+
+This directory contains an example that can be used to set up Apache JMeter to load-test functional behavior and measure performance.
+
+## launcher
+
+This directory contains the code that launches (starts, restarts, and stops) an ArchivesSpace application.
+
+## oai
+
+This directory contains the OAI-PMH Sinatra application.
+
+## plugins
+
+This directory contains plugins supported by the ArchivesSpace Program Team.
+
+## proxy
+
+This directory contains the Docker proxy code.
+
+## public
+
+This directory contains the public interface Ruby on Rails application.
+
+## reports
+
+This directory contains the reports code.
+
+## scripts
+
+This directory contains scripts necessary for building, deploying, and other ArchivesSpace tasks.
+
+## selenium
+
+This directory contains the Selenium tests.
+
+## solr
+
+This directory contains the Solr code.
+
+## stylesheets
+
+This directory contains XSL stylesheets used by ArchivesSpace.
+
+## supervisord
+
+This directory contains a tool that can be used to run the development servers.
diff --git a/src/content/docs/es/architecture/frontend.md b/src/content/docs/es/architecture/frontend.md new file mode 100644 index 0000000..50e9665 --- /dev/null +++ b/src/content/docs/es/architecture/frontend.md @@ -0,0 +1,7 @@ +--- +title: Staff interface +--- + +This document provides an overview of the parts of the ArchivesSpace codebase which control the frontend/staff interface. For guidance on using the ArchivesSpace staff interface, consult the [ArchivesSpace Help Center](https://archivesspace.atlassian.net/wiki/spaces/ArchivesSpaceUserManual/overview) (ArchivesSpace members only). + +> Additional documentation needed diff --git a/src/content/docs/es/architecture/index.md b/src/content/docs/es/architecture/index.md new file mode 100644 index 0000000..786335d --- /dev/null +++ b/src/content/docs/es/architecture/index.md @@ -0,0 +1,25 @@ +--- +title: Architecture and components +description: Abbreviated description of how the different parts of ArchivesSpace interact with each other with links to each section. +--- + +ArchivesSpace is divided into several components: the backend, which +exposes the major workflows and data types of the system via a +REST API, a staff interface, a public interface, and a search system, +consisting of Solr and an indexer application. + +These components interact by exchanging JSON data. The format of this +data is defined by a class called JSONModel. 
+ +- [Overview](./overview) +- [JSONModel -- a validated ArchivesSpace record](./jsonmodel) +- [The ArchivesSpace backend](./backend) +- [The ArchivesSpace staff interface](./frontend) +- [Background Jobs](./jobs) +- [Search indexing](./search) +- [The ArchivesSpace public user interface](./public) +- [OAI-PMH interface](./oai-pmh) +- [API](./api) +- [Database](./database) +- [Directory structure](./directories) +- [Dependencies](./languages) diff --git a/src/content/docs/es/architecture/jobs.md b/src/content/docs/es/architecture/jobs.md new file mode 100644 index 0000000..5e2ef01 --- /dev/null +++ b/src/content/docs/es/architecture/jobs.md @@ -0,0 +1,118 @@ +--- +title: Background jobs +description: Describes long running processes, called background jobs, in ArchivesSpace, as well as how they are structured using types, runners, and schemas. Additional guidance on setting jobs to run concurrently and how to add a new job type using a plugin. +--- + +ArchivesSpace provides a mechanism for long-running processes to run +asynchronously. These processes are called `Background Jobs`. + +## Managing Jobs in the Staff UI + +The `Create` menu has a `Background Job` option which shows a submenu of job +types that the current user has permission to create. (See below for more +information about job permissions and hidden jobs.) Selecting one of these +options will take the user to a form to enter any parameters required for the +job and then to create it. + +When a job is created it is placed in the `Background Job Queue`. Jobs in the +queue will be run in the order they were created. (See below for more +information about multiple threads and concurrent jobs.) + +The `Browse` menu has a `Background Jobs` option. This takes the user to a list +of jobs arranged by their status. The user can then view the details of a job, +and cancel it if they have permission. + +## Permissions + +A user must have the `create_job` permission to create a job. 
By default, this
+permission is included in the `repository_basic_data_entry` group.
+
+A user must have the `cancel_job` permission to cancel a job. By default, this
+permission is included in the `repository_managers` group.
+
+When a JobRunner registers, it can specify additional create and cancel
+permissions. (See below for more information.)
+
+## Types, Runners and Schemas
+
+Each job has a type, and each type has a registered runner to run jobs of that
+type and a JSONModel schema to define its parameters.
+
+### Registered JobRunners
+
+All jobs of a type are handled by a registered `JobRunner`. The job runner
+classes are located here:
+
+```
+backend/app/lib/job_runners/
+```
+
+It is possible to define additional job runners from a plugin. (See below for
+more information about plugins.)
+
+A job runner class must subclass `JobRunner`, register to run one or more job
+types, and implement a `#run` method for jobs that it handles.
+
+When a job runner registers for a job type, it can set some options:
+
+- `:hidden`
+  - Defaults to `false`
+  - If this is set, this job type will not be shown in the list of available job types.
+- `:run_concurrently`
+  - Defaults to `false`
+  - If this is set to true, more than one job of this type can run at the same time.
+- `:create_permissions`
+  - Defaults to `[]`
+  - A permission or list of permissions required, in addition to `create_job`, to create jobs of this type.
+- `:cancel_permissions`
+  - Defaults to `[]`
+  - A permission or list of permissions required, in addition to `cancel_job`, to cancel jobs of this type.
+
+For more information about defining a job runner, see the `JobRunner` superclass:
+
+```
+backend/app/lib/job_runner.rb
+```
+
+### JSONModel Schemas
+
+A job type also requires a JSONModel schema that defines the parameters to run a
+job of the type. The schema name must be the same as the type that the runner
+registers for. 
For example: + +``` +common/schemas/import_job.rb +``` + +This schema, for `JSONModel(:import_job)`, defines the parameters for running a +job of type `import_job`. + +## Concurrency + +ArchivesSpace can be configured to run more than one background job at a time. +By default, there will be two threads available to run background jobs. +The configuration looks like this: + +``` +AppConfig[:job_thread_count] = 2 +``` + +The `BackgroundJobQueue` will start this number of threads at start up. Those +threads will then poll for queued jobs and run them. + +When a job runner registers, it can set an option called `:run_concurrently`. +This is `false` by default. When set to `false` a job thread will not run a job +if there is already a job of that type running. The job will remain on the queue +and will be run when there are no longer any jobs of its type running. + +When set to `true` a job will be run when it comes to the front of the queue +regardless of whether there is a job of the same type running. + +## Plugins + +It is possible to add a new job type from a plugin. ArchivesSpace includes a +plugin that demonstrates how to do this: + +``` +plugins/jobs_example +``` diff --git a/src/content/docs/es/architecture/jsonmodel.md b/src/content/docs/es/architecture/jsonmodel.md new file mode 100644 index 0000000..9002c8b --- /dev/null +++ b/src/content/docs/es/architecture/jsonmodel.md @@ -0,0 +1,103 @@ +--- +title: JSONModel +description: Describes the rules and structure behind the JSONModel class, which expresses the rules for different types of archival records. JSONModel instances are the primary data interchange mechanism for ArchivesSpace. +--- + +The ArchivesSpace system is concerned with managing a number of +different archival record types. 
Each record can be expressed as a
+set of nested key/value pairs, and associated with each record type is
+a number of rules that describe what it means for a record of that
+type to be valid:
+
+- some fields are mandatory, some optional
+- some fields can only take certain types
+- some fields can only take values from a constrained set
+- some fields are dependent on other fields
+- some record types can be nested within other record types
+- some record types may be related to others through a hierarchy
+- some record types form a relationship graph with other record
+  types
+
+The JSONModel class provides a common language for expressing these
+rules that all parts of the application can share. There is a
+JSONModel class instance for each type of record in the system, so:
+
+```ruby
+JSONModel(:digital_object)
+```
+
+is a class that knows how to take a hash of properties and make sure
+those properties conform to the specification of a Digital Object:
+
+```ruby
+JSONModel(:digital_object).from_hash(myhash)
+```
+
+If it passes validation, a new JSONModel(:digital_object) instance is
+returned, which provides accessors for accessing its values, and
+facilities for round-tripping between JSON documents and regular Ruby
+hashes:
+
+```ruby
+obj = JSONModel(:digital_object).from_hash(myhash)
+
+obj.title # or obj['title']
+obj.title = 'a new title' # or obj['title'] = 'a new title'
+
+obj._exceptions # Validates the object and reports any issues
+
+obj.to_hash # Turn the JSONModel object back into a regular hash
+obj.to_json # Serialize the JSONModel object into JSON
+```
+
+Much of the validation performed by JSONModel is provided by the JSON
+schema definitions listed in the `common/schemas` directory. JSON
+schemas provide a standard way of declaring which properties a record
+may and may not contain, along with their types and other
+restrictions. 
ArchivesSpace uses these schemas to capture the +validation rules defining each record type in a declarative and +relatively self-documenting fashion. + +JSONModel instances are the primary data interchange mechanism for the +ArchivesSpace system: the API consumes and produces JSONModel +instances (in JSON format), and much of the user interface's life is +spent turning forms into JSONModel instances and shipping them off to +the backend. + +## JSONModel::Client -- A high-level API for interacting with the ArchivesSpace backend + +To save the need for a lot of HTTP request wrangling, ArchivesSpace +ships with a module called JSONModel::Client that simplifies the +common CRUD-style operations. Including this module just requires +passing an additional parameter when initializing JSONModel: + +```ruby +JSONModel::init(:client_mode => true, :url => @backend_url) +include JSONModel +``` + +If you'll be working against a single repository, it's convenient to +set it as the default for subsequent actions: + +```ruby +JSONModel.set_repository(123) +``` + +Then, several additional JSONModel methods are available: + +```ruby +# As before, get a paginated list of accessions (GET) +JSONModel(:accession).all(:page => 1) + +# Create a new accession (POST) +obj = JSONModel(:accession).from_hash(:title => "A new accession", ...) +obj.save + +# Get a single accession by ID (GET) +obj = JSONModel(:accession).find(123) + +# Update an existing accession (POST) +obj = JSONModel(:accession).find(123) +obj.title = "Updated title" +obj.save +``` diff --git a/src/content/docs/es/architecture/languages.md b/src/content/docs/es/architecture/languages.md new file mode 100644 index 0000000..e36d138 --- /dev/null +++ b/src/content/docs/es/architecture/languages.md @@ -0,0 +1,18 @@ +--- +title: Dependencies +description: Lists the technical stack of the application, including programming languages and platforms. 
+--- + +ArchivesSpace components are constructed using several programming languages, platforms, and additional open source projects. + +## Languages + +The languages used are Java, JRuby, Ruby, JavaScript, and CSS. + +## Platforms + +The backend, OAI harvester, and indexer are Sinatra apps. The staff and public user interfaces are Ruby on Rails apps. + +## Additional open source projects + +The database used out of the box and for testing is Apache Derby. The database suggested for production is MySQL. The index platform is Apache Solr. diff --git a/src/content/docs/es/architecture/oai-pmh.md b/src/content/docs/es/architecture/oai-pmh.md new file mode 100644 index 0000000..b538aa3 --- /dev/null +++ b/src/content/docs/es/architecture/oai-pmh.md @@ -0,0 +1,130 @@ +--- +title: OAI-PMH interface +description: Describes how OAI-PMH is set up in ArchivesSpace and how to harvest data using OAI-PMH with example links and additional information. +--- + +A starter OAI-PMH interface for ArchivesSpace allowing other systems to harvest +your records is included in version 2.1.0. Additional features and functionality +will be added in later releases. + +By default, the OAI-PMH interface runs on port 8082. A sample request page is +available at http://localhost:8082/sample. (To access it, make sure that you +have set the AppConfig[:oai_proxy_url] appropriately.) + +The system provides responses to a number of standard OAI-PMH requests, +including GetRecord, Identify, ListIdentifiers, ListMetadataFormats, +ListRecords, and ListSets. Unpublished and suppressed records and elements are +not included in any of the OAI-PMH responses. + +Some responses require the URL parameter metadataPrefix. 
There are five +different metadata responses available: + +- EAD -- oai_ead (resources in EAD) +- Dublin Core -- oai_dc (archival objects and resources in Dublin Core) +- extended DCMI Terms -- oai_dcterms (archival objects and resources in DCMI Metadata Terms format) +- MARC -- oai_marc (archival objects and resources in MARC) +- MODS -- oai_mods (archival objects and resources in MODS) + +The EAD response for resources and MARC response for resources and archival +objects use the mappings from the built-in exporter for resources. The DC, +DCMI terms, and MODS responses for resources and archival objects use mappings +suggested by the community. + +Here are some example URLs and other information for these requests: + +**GetRecord** – needs a record identifier and metadataPrefix +Up to ArchivesSpace v3.5.1 OAI identifiers are in this format: + +`http://localhost:8082/oai?verb=GetRecord&identifier=oai:archivesspace//repositories/2/resources/138&metadataPrefix=oai_ead` + +Starting with ArchivesSpace v4.0.0 OAI identifiers are in the new format (notice the colon after the `oai:archivesspace` namespace part of the identifier): + +`http://localhost:8082/oai?verb=GetRecord&identifier=oai:archivesspace:/repositories/2/resources/138&metadataPrefix=oai_ead` + +see also: https://github.com/code4lib/ruby-oai/releases/tag/v1.0.0 + +**Identify** + +`http://localhost:8082/oai?verb=Identify` + +**ListIdentifiers** – needs a metadataPrefix + +`http://localhost:8082/oai?verb=ListIdentifiers&metadataPrefix=oai_dc` + +**ListMetadataFormats** + +`http://localhost:8082/oai?verb=ListMetadataFormats` + +**ListRecords** – needs a metadataPrefix + +`http://localhost:8082/oai?verb=ListRecords&metadataPrefix=oai_dcterms` + +**ListSets** + +`http://localhost:8082/oai?verb=ListSets` + +Harvesting the ArchivesSpace OAI-PMH server without specifying a set will yield +all published records across all repositories. +Predefined sets can be accessed using the set parameter. 
In order to retrieve
+records from sets, include a set parameter in the URL and the DC metadataPrefix,
+such as "&set=collection&metadataPrefix=oai_dc". These sets can be from
+configured sets as shown below or from the following levels of description:
+
+- Class -- class
+- Collection -- collection
+- File -- file
+- Fonds -- fonds
+- Item -- item
+- Other_Level -- otherlevel
+- Record_Group -- recordgrp
+- Series -- series
+- Sub-Fonds -- subfonds
+- Sub-Group -- subgrp
+- Sub-Series -- subseries
+
+In addition to the sets based on level of description, you can define sets
+based on repository codes and/or sponsors in the config/config.rb file:
+
+```ruby
+AppConfig[:oai_sets] = {
+  'repository_set' => {
+    :repo_codes => ['hello626'],
+    :description => "A set of one or more repositories",
+  },
+  'sponsor_set' => {
+    :sponsors => ['The_Sponsor'],
+    :description => "A set of one or more sponsors",
+  }
+}
+```
+
+The interface implements resumption tokens for pagination of results. As an
+example, the following URL format should be used to page through the results
+from a ListRecords request:
+
+`http://localhost:8082/oai?verb=ListRecords&metadataPrefix=oai_ead`
+
+using the resumption token:
+
+`http://localhost:8082/oai?verb=ListRecords&resumptionToken=eyJtZXRhZGF0YV9wcmVmaXgiOiJvYWlfZWFkIiwiZnJvbSI6IjE5NzAtMDEtMDEgMDA6MDA6MDAgVVRDIiwidW50aWwiOiIyMDE3LTA3LTA2IDE3OjEwOjQxIFVUQyIsInN0YXRlIjoicHJvZHVjaW5nX3JlY29yZHMiLCJsYXN0X2RlbGV0ZV9pZCI6MCwicmVtYWluaW5nX3R5cGVzIjp7IlJlc291cmNlIjoxfSwiaXNzdWVfdGltZSI6MTQ5OTM2MTA0Mjc0OX0=`
+
+Note: you do not use the metadataPrefix when you use the resumptionToken.
+
+The ArchivesSpace OAI-PMH server supports persistent deletes, so harvesters
+will be notified of any records that were deleted since
+they last harvested. 
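As a concrete example combining the set and metadataPrefix parameters described above, a harvest of all published collection-level records in Dublin Core looks like this:

`http://localhost:8082/oai?verb=ListRecords&set=collection&metadataPrefix=oai_dc`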
+ +Mixed content is removed from Dublin Core, dcterms, MARC, and MODS field outputs +in the OAI-PMH response (e.g., a scope note mapped to a DC description field +would not include `<p>`, `<abbr>`, `<address>`, `<archref>`, `<bibref>`, `<blockquote>`, +`<chronlist>`, `<corpname>`, `<date>`, `<emph>`, `<expan>`, `<extptr>`, `<extref>`, +`<famname>`, `<function>`, `<genreform>`, `<geogname>`, `<lb>`, `<linkgrp>`, `<list>`, +`<name>`, `<note>`, `<num>`, `<occupation>`, `<origination>`, `<persname>`, `<ptr>`, `<ref>`, `<repository>`, `<subject>`, `<table>`, `<title>`, `<unitdate>`, `<unittitle>`). + +The component level records include inherited data from superior hierarchical +levels of the finding aid. Element inheritance is determined by institutional +system configuration (editable in the config/config.rb file) as implemented for +the Public User Interface. + +ARKs have not yet been implemented, pending more discussion of how they should +be formulated. diff --git a/src/content/docs/es/architecture/overview.md b/src/content/docs/es/architecture/overview.md new file mode 100644 index 0000000..b4a7375 --- /dev/null +++ b/src/content/docs/es/architecture/overview.md @@ -0,0 +1,15 @@ +--- +title: Architecture Overview +description: The main components of ArchivesSpace and how they interact with each other and the end users. +--- + +ArchivesSpace is divided into several components: + +- the backend, which exposes the major workflows and data types of the system via a REST API, +- a staff interface, +- a public interface, +- a search system, consisting of Solr and an indexer application. + +These components interact by exchanging JSON data. The format of this data is defined by a class called JSONModel. 
+
+![archivesspace_architecture](./archivesspace_architecture.svg)
diff --git a/src/content/docs/es/architecture/public.md b/src/content/docs/es/architecture/public.md
new file mode 100644
index 0000000..aa6419d
--- /dev/null
+++ b/src/content/docs/es/architecture/public.md
@@ -0,0 +1,154 @@
+---
+title: Public user interface
+description: Directions for configuration options for the ArchivesSpace Public User Interface, as well as an explanation of inheritance of data in records.
+---
+
+The ArchivesSpace Public User Interface (PUI) provides a public
+interface to your ArchivesSpace collections. In a default
+ArchivesSpace installation it runs on port `:8081`.
+
+## Configuration
+
+The PUI is configured using the standard ArchivesSpace `config.rb`
+file, with the relevant configuration options prefixed with
+`:pui_`.
+
+To see the full list of available options, see the file
+[`https://github.com/archivesspace/archivesspace/blob/master/common/config/config-defaults.rb`](https://github.com/archivesspace/archivesspace/blob/master/common/config/config-defaults.rb)
+
+### Preserving Patron Privacy
+
+The **:block_referrer** key in the configuration file (default: **true**) determines whether the full referring URL is
+transmitted when the user clicks a link to a website outside the web domain of your instance of ArchivesSpace. This
+protects your patrons from tracking by that site.
+
+### Main Navigation Menu
+
+You can choose not to display one or more of the links on the main
+(horizontal) navigation menu, either globally or by repository, if you
+have more than one repository. You manage this through the
+`:pui_hide` options in the `config/config.rb` file.
+
+### Repository Customization
+
+#### Display of "badges" on the Repository page
+
+You can configure which badges appear on the Repository page, either
+globally or by repository. See the `:pui_hide` configuration options
+for these too. 
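As an illustrative sketch only (the key name below is an assumption; `common/config/config-defaults.rb` lists the authoritative `:pui_hide` keys), hiding one of these links in `config/config.rb` might look like:

```ruby
# Hypothetical sketch: hide the Subjects link site-wide.
# The :subjects key is an assumption; check config-defaults.rb
# for the real list of :pui_hide options.
AppConfig[:pui_hide][:subjects] = true
```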
+
+### Activation of the "Request" button on archival object pages
+
+You can configure, either globally or by repository, whether the
+"Request" button is active on archival object pages for objects that
+don't have an associated Top Container. See the
+`:pui_requests_permitted_for_containers_only` configuration option to
+modify this.
+
+### I18n
+
+You can change the text and labels used by the PUI by editing the
+locale files under the `locales/public` directory of your
+ArchivesSpace distribution.
+
+### Addition of a "lead paragraph"
+
+You can also use the custom `.yml` files, described above, to add a
+custom "lead paragraph" (including HTML markup) for one or more of
+your repositories, keyed to the repository's code.
+
+For example, if your repository, `My Wonderful Repository`, has a code of `MWR`, this is what you might see in the
+custom `en.yml`:
+
+```yaml
+en:
+  repos:
+    mwr:
+      lead_graph: This <strong>amazing</strong> repository has so much to offer you!
+```
+
+## Development
+
+To run a development server, the PUI follows the same pattern as the rest of ArchivesSpace. 
From your ArchivesSpace checkout: + +```shell + # Prepare all dependencies + build/run bootstrap + + # Run the backend development server (and Solr) + build/run backend:devserver + + # Run the indexer + build/run indexer + + # Finally, run the PUI itself + build/run public:devserver +``` + +## Inheritance + +### Three options for inheritance: + +- Directly inherit a value for a field – the record has no value for the field and you want the value in the field to display as if it were the record’s own [uncomment the inheritance section in the config, set desired field (property) to inherit_directly => true] +- Indirectly inherit a value for a field – the record has no value for the field and you want to display the value from a higher level, but precede it with a note that indicates that it comes from that higher level, such as "From the collection" [uncomment the inheritance section in the config, set desired field (property) to inherit_directly => false] +- Don’t display the field at all – the record has no value of its own for the field and you don’t want it to display at all [uncomment the inheritance section in the config, delete the lines for the desired field (property)] + +### Archival Inheritance + +With the new version of the Public Interface, all elements of description can be inherited. This is especially important since the PUI displays each level of description as its own webpage. + +Each element of description can be inherited either directly or indirectly. When an element is inherited directly, it will appear as if that element was attached directly to that archival object in the staff interface. When an element is inherited indirectly, it will appear on the lower-level of description in the public interface, but the inherited element will be preceded with a note indicating the level of the ancestor from which the note is inherited (e.g. From the Collection, or From the Sub-Series). 
In both cases, the element will only be inherited if it is missing from the archival object. Additionally, the element of description will only be inherited from the closest ancestor. In other words, if a top-level collection record has an access restrictions note, and a child-level series record has an access restrictions note, but the sub-series child of that series record lacks an access restrictions note, then the sub-series record will inherit only the access restrictions note from its parent series record.
+
+Additionally, the identifier element in ArchivesSpace, which is better known as the Reference Code in ISAD-G and DACS, can be inherited in a composite manner. When inherited in a composite manner, the inherited elements will be concatenated together. In other words, an identifier at the item level could look like this: MSS 1. Series A. Item 1. Though the archival object has an identifier of "Item 1", a composite identifier is displayed since the series-level record to which the item belongs has an identifier of "Series A", which in turn also belongs to a collection-level record that has an identifier of "MSS 1".
+
+By default, the following elements are turned on for inheritance:
+
+- Title (direct inheritance)
+- Identifier (indirect inheritance), but by default the identifier inherits from ancestor archival objects only; it does NOT include the resource identifier. 
+
+  Also, it is advised to inherit this element in a composite fashion once it is determined whether the level of description should or should not display as part of the identifier, which will depend upon local data-entry practices.
+
+- Language code (direct inheritance, but it does NOT display anywhere in the interface currently; eventually, this could be used for faceting)
+- Dates (direct inheritance)
+- Extents (indirect inheritance)
+- Creator (indirect inheritance)
+- Access restrictions note (direct inheritance)
+- Scope and contents note (indirect inheritance)
+- Language of Materials note (indirect inheritance, but there seems to be a bug right now so that the Language notes always show up as being directly inherited. See AR-XXXX)
+
+See https://github.com/archivesspace/archivesspace/blob/master/common/config/config-defaults.rb#L296-L396 for more information and examples.
+
+Also, a video overview of this feature, which was recorded before development was finished, is available online:
+https://vimeo.com/195457286
+
+### Composite Identifier Inheritance
+
+If you add the following lines to your configuration file, restart ArchivesSpace, and then let the indexer re-index your records, you can gain the benefit of composite identifiers:
+
+```ruby
+AppConfig[:record_inheritance][:archival_object][:composite_identifiers] = {
+  :include_level => true,
+  :identifier_delimiter => '. '
+}
+```
+
+To add extra fields, such as subjects, you can add the following:
+
+```ruby
+inherited_fields_extras = [
+  {
+    code: 'subjects',
+    property: 'subjects',
+    inherit_if: proc { |json| json.select { |j| true } },
+    inherit_directly: false,
+  },
+]
+```
+
+When you set `:include_level` to `true`, the archival object's level will be included in the identifier so that you don't have to repeat that data. 
For example, if the level of description is "Series" and the archival object identifier is "1", and the parent resource identifier is "MSS 1", then the composite identifier would display as "MSS 1. Series 1" at the Series 1 level and on any of its descendant records. If you set `:include_level` to `false`, then the display would be "MSS 1. 1".
+
+### License
+
+ArchivesSpace is released under the [Educational Community License,
+version 2.0](http://opensource.org/licenses/ecl2.php). See the
+[COPYING](https://github.com/archivesspace/archivesspace/blob/master/COPYING) file for more information.
diff --git a/src/content/docs/es/architecture/search.md b/src/content/docs/es/architecture/search.md
new file mode 100644
index 0000000..6320831
--- /dev/null
+++ b/src/content/docs/es/architecture/search.md
@@ -0,0 +1,46 @@
+---
+title: Search indexing
+description: Explanation of how ArchivesSpace uses Solr for indexing added/updated/deleted records and the differences between the periodic and real-time modes of indexing records.
+---
+
+The ArchivesSpace system uses Solr for its full-text search. As
+records are added/updated/deleted by the backend, the corresponding
+changes are made to the Solr index to keep them (roughly)
+synchronized.
+
+Keeping the backend and Solr in sync is the job of the "indexer", a
+separate process that runs in the background and watches for record
+updates. The indexer operates in two modes simultaneously:
+
+- The periodic mode polls the backend to get a list of records that
+  were added/modified/deleted since it last checked. These changes
+  are propagated to the Solr index. This generally happens every 30
+  to 60 seconds (and is configurable).
+- The real-time mode responds to updates as they happen, applying
+  changes to Solr as soon as they're applied to the backend. This
+  aims to reflect updates within the search indexes in milliseconds
+  or seconds.
+
+The two modes of operation overlap somewhat, but they serve different
+purposes. 
The periodic mode ensures that records are never missed due
+to transient failures, and will bring the indexes up to date even if
+the indexer hasn't run for quite some time--even creating them from
+scratch if necessary. This mode is also used for indexing updates
+made by bulk import processes and other updates that don't need to be
+reflected in the indexes immediately.
+
+The real-time indexer mode attempts to apply updates to the index much
+more quickly. Rather than polling, it performs a `GET` request
+against the `/update-feed` endpoint of the backend. This endpoint
+returns any records that were updated since the last time it was asked
+and, most importantly, it leaves the request hanging if no records
+have changed.
+
+By calling this endpoint in a loop, the real-time indexer spends most
+of its time sitting around waiting for something to happen. The
+moment a record is updated, the already-pending request to the
+`/update-feed` endpoint yields the updated record, which is sent to
+Solr and indexed immediately. This avoids the delays associated with
+polling and keeps indexing latency low where it matters. For example,
+a newly created record should appear in the browse list by the time a
+user goes to view it.
diff --git a/src/content/docs/es/customization/authentication.md b/src/content/docs/es/customization/authentication.md
new file mode 100644
index 0000000..e68959a
--- /dev/null
+++ b/src/content/docs/es/customization/authentication.md
@@ -0,0 +1,139 @@
+---
+title: Additional authentication
+description: Instructions on how to install and configure a custom authentication handler via a plugin.
+---
+
+ArchivesSpace supports LDAP-based authentication out of the box, but you can
+authenticate against other password-based user directories by defining your own
+authentication handler, creating a plug-in, and configuring your ArchivesSpace
+instance to use it. 
If you would rather not have to create your own handler,
+there is a [plugin](https://github.com/lyrasis/aspace-oauth) available that uses OAuth for user authentication that you can add
+to your ArchivesSpace installation.
+
+## Creating a new authentication handler class to use in a plug-in
+
+An authentication handler is just a class that implements a couple of
+key methods:
+
+- `initialize(opts)` -- An object constructor which receives the
+  configuration block specified in the system's configuration.
+- `name` -- A zero-argument method which just returns a string that
+  identifies the instance of your handler. The format of this
+  string isn't important: it just gets stored as a user attribute
+  (in the ArchivesSpace database) to make it possible to tell which
+  authentication source a user last successfully authenticated
+  against.
+- `authenticate(username, password)` -- a method which checks
+  whether `password` is the correct password for `username`. If the
+  password is correct, returns an instance of `JSONModel(:user)`.
+  Otherwise, returns `nil`.
+
+A new instance of your handler will be created for each login attempt,
+so there's no need to handle concurrency in your implementation.
+
+Your `authenticate` method can do whatever is required to check that
+the provided password is correct, with the only constraint being that
+it must return either `nil` or a `JSONModel(:user)` instance.
+
+The `JSONModel(:user)` class (whose JSON schema is defined in
+`common/schemas/user.rb`) defines the set of properties that the
+system needs for a user. When you return a `JSONModel(:user)` object,
+its values will be used to create an ArchivesSpace user (if a user by
+that name didn't exist already), or update the existing user (if they
+were already known).
+
+**Note**: The `JSONModel(:user)` class validates the values you give it
+against its JSON schema and throws a `JSONModel::ValidationException`
+if anything isn't right. 
If this happens within your handler, the
+exception will be logged and the authentication request will fail.
+
+### A skeleton implementation
+
+Suppose you already have a database with a table containing users that
+should be able to log in to ArchivesSpace. Below is a sketch of an
+authentication handler that will connect to this database and use it
+for authentication.
+
+```ruby
+# For this example we'll use the Sequel database toolkit. Note that
+# this isn't necessary--you could use whatever database library you
+# like here.
+require 'sequel'
+
+class MyDatabaseAuth
+
+  # For easy access to the JSONModel(:user) class
+  include JSONModel
+
+
+  def initialize(definition)
+    # Store the database connection details for use at
+    # authentication time.
+    @db_url = definition[:db_url] or raise "Need a value for :db_url"
+  end
+
+
+  # Just for informational purposes. Return a string containing our
+  # database URL.
+  def name
+    "MyDatabaseAuth - #{@db_url}"
+  end
+
+
+  def authenticate(username, password)
+    # Open a connection to the database
+    Sequel.connect(@db_url) do |db|
+
+      # Check whether we have an entry for the given username
+      # and password in our database's "users" table
+      user = db[:users].filter(:username => username,
+                               :password => password).
+               first
+
+      if !user
+        # The user couldn't be found, or their password was wrong.
+        # Authentication failed.
+        return nil
+      end
+
+      # Build and return a JSONModel(:user) instance from fields in the database
+      JSONModel(:user).from_hash(:username => username,
+                                 :name => user[:user_full_name])
+
+    end
+  end
+
+end
+```
+
+In order to use your new authentication handler, you'll need to add it to the plug-in
+architecture in ArchivesSpace and enable it. Create a new directory, perhaps called
+`our_auth`, in the `plugins` directory of your ArchivesSpace installation. Inside
+that directory, create the directory hierarchy `backend/model/` and place the
+new class file there. Next, configure the new handler. 
+
+## Modifying your configuration
+
+To have ArchivesSpace invoke your new authentication handler, just add
+a new entry to the `:authentication_sources` configuration block in the
+`config/config.rb` file.
+
+A configuration for the above example might be as follows:
+
+```ruby
+AppConfig[:authentication_sources] = [{
+  :model => 'MyDatabaseAuth',
+  :db_url => 'jdbc:mysql://localhost:3306/somedb?user=myuser&password=mypassword',
+}]
+```
+
+## Add the plug-in to the list of plug-ins already enabled
+
+In the `config/config.rb` file, find the setting of `AppConfig[:plugins]` and add
+a reference to the new plug-in there. For example, if you named it `our_auth`, the
+`AppConfig[:plugins]` setting may look something like this:
+
+```ruby
+AppConfig[:plugins] = ['local', 'hello_world', 'our_auth']
+```
+
+Restart your ArchivesSpace installation and you should now see authentication
+requests hitting your new handler.
diff --git a/src/content/docs/es/customization/bower.md b/src/content/docs/es/customization/bower.md
new file mode 100644
index 0000000..1197f7f
--- /dev/null
+++ b/src/content/docs/es/customization/bower.md
@@ -0,0 +1,68 @@
+---
+title: Managing frontend assets with Bower
+description: Instructions on how to add static assets to the core project.
+---
+
+This is aimed at developers and applies to the 'frontend' application only.
+
+If you wish to add static assets to the core project (i.e., JavaScript, CSS,
+and Less files), please use `bower` to add and install them so we know what's
+what and when to upgrade.
+
+If you wish to do a good deed for ArchivesSpace, you can track down the source
+of any vendor assets not included in `bower.json` and get them updated and
+installed according to this protocol. 
+
+## General Setup
+
+### Step 1: install npm
+
+On macOS, for example:
+
+```shell
+brew install npm
+```
+
+### Step 2: install Bower
+
+```shell
+npm install bower -g
+```
+
+### Step 3: install components
+
+```shell
+bower install
+```
+
+## Adding a static asset to ASpace Frontend (Staff UI)
+
+### Step 1: add the component
+
+```shell
+bower install <PACKAGE NAME> --save
+```
+
+### Step 2: map Bower > Rails
+
+Edit the `bower.json` file to map the assets you want from `bower_components`
+to `assets`. See the examples in `bower.json`. This is kind of a hack to work
+around https://github.com/blittle/bower-installer/issues/75
+
+### Step 3: Install assets
+
+```shell
+alias npm-exec='PATH=$(npm bin):$PATH'
+npm-exec bower-installer
+```
+
+### Step 4: Check assets in
+
+Check the installed assets into Git. We version control `bower.json` and the
+installed files, but not the `bower_components` directory.
+
+### Production!
+
+Don't forget: if you are adding assets that don't have a `.js` extension, you
+need to add them to `frontend/config/environments/production.rb`
diff --git a/src/content/docs/es/customization/configuration.md b/src/content/docs/es/customization/configuration.md
new file mode 100644
index 0000000..ef98c89
--- /dev/null
+++ b/src/content/docs/es/customization/configuration.md
@@ -0,0 +1,1249 @@
+---
+title: Configuration
+description: Lists all configuration options available within the config/config.rb file, including configuration names, values, and suggestions for setup.
+---
+
+The primary configuration for ArchivesSpace is done in the `config/config.rb`
+file. By default, this file contains the default settings, which are indicated
+by commented-out lines (those starting with a `#` in the file). You can adjust
+these settings by adding new lines that change the default and restarting
+ArchivesSpace. Be sure that your new settings are not commented out
+(i.e. they do NOT start with a `#`), otherwise the settings will not take effect. 
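+
+As a concrete sketch of that pattern (the hostname below is a made-up
+example, not a shipped default), overriding a setting means copying the
+commented default onto a new, uncommented line:
+
+```ruby
+# Shipped default -- commented out, so it has no effect:
+# AppConfig[:frontend_url] = "http://localhost:8080"
+
+# Your override -- uncommented, takes effect after restarting ArchivesSpace:
+AppConfig[:frontend_url] = "http://staff.example.org"
+```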
+ +## Commonly changed settings + +### Database config + +#### :db_url + +Set your database name and credentials. The default specifies that the embedded database should be used. +It is recommended to use a MySQL database instead of the embedded database. +For more info, see [Using MySQL](/provisioning/mysql) + +This is an example of specifying MySQL credentials: + +`AppConfig[:db_url] = "jdbc:mysql://127.0.0.1:3306/aspace?useUnicode=true&characterEncoding=UTF-8&user=as&password=as123"` + +#### :db_max_connections + +Set the maximum number of database connections used by the application. +Default is derived from the number of indexer threads. + +`AppConfig[:db_max_connections] = proc { 20 + (AppConfig[:indexer_thread_count] * 2) }` + +### URLs for ArchivesSpace components + +Set the ArchivesSpace backend port. The backend listens on port 8089 by default. + +`AppConfig[:backend_url] = "http://localhost:8089"` + +Set the ArchivesSpace staff interface (frontend) port. The staff interface listens on port 8080 by default. + +`AppConfig[:frontend_url] = "http://localhost:8080"` + +Set the ArchivesSpace public interface port. The public interface listens on port 8081 by default. + +`AppConfig[:public_url] = "http://localhost:8081"` + +Set the ArchivesSpace OAI server port. The OAI server listens on port 8082 by default. + +`AppConfig[:oai_url] = "http://localhost:8082"` + +Set the ArchivesSpace Solr index port. The Solr server listens on port 8090 by default. + +`AppConfig[:solr_url] = "http://localhost:8090"` + +Set the ArchivesSpace indexer port. The indexer listens on port 8091 by default. + +`AppConfig[:indexer_url] = "http://localhost:8091"` + +Set the ArchivesSpace API documentation port. The API documentation listens on port 8888 by default. 
+
+`AppConfig[:docs_url] = "http://localhost:8888"`
+
+### Enabling ArchivesSpace components
+
+Enable or disable specific components by setting the following settings to `true` or `false` (each defaults to `true`):
+
+```ruby
+AppConfig[:enable_backend] = true
+AppConfig[:enable_frontend] = true
+AppConfig[:enable_public] = true
+AppConfig[:enable_solr] = true
+AppConfig[:enable_indexer] = true
+AppConfig[:enable_docs] = true
+AppConfig[:enable_oai] = true
+```
+
+### Application logging
+
+By default, all logging will be output on the screen while the `archivesspace` command
+is running. When running as a daemon/service, this is put into a file in
+`logs/archivesspace.out`. You can route log output to a different file per component by changing the log value to
+a filepath that ArchivesSpace has write access to.
+
+You can also set the logging level for each component. Valid values are:
+
+- `debug` (everything)
+- `info`
+- `warn`
+- `error`
+- `fatal` (severe only)
+
+#### `AppConfig[:frontend_log]`
+
+File for log output for the frontend (staff interface). Set to "default" to
+route log output to `archivesspace.out`.
+
+#### `AppConfig[:frontend_log_level]`
+
+Logging level for the frontend.
+
+#### `AppConfig[:backend_log]`
+
+File for log output for the backend. Set to "default" to
+route log output to `archivesspace.out`.
+
+#### `AppConfig[:backend_log_level]`
+
+Logging level for the backend.
+
+#### `AppConfig[:pui_log]`
+
+File for log output for the public UI. Set to "default" to
+route log output to `archivesspace.out`.
+
+#### `AppConfig[:pui_log_level]`
+
+Logging level for the public UI.
+
+#### `AppConfig[:indexer_log]`
+
+File for log output for the indexer. Set to "default" to
+route log output to `archivesspace.out`.
+
+#### `AppConfig[:indexer_log_level]`
+
+Logging level for the indexer.
+
+### Database logging
+
+#### `AppConfig[:db_debug_log]`
+
+Set to `true` to log all SQL statements.
+Note that this will have a performance impact! 
+
+`AppConfig[:db_debug_log] = false`
+
+#### `AppConfig[:mysql_binlog]`
+
+Set to `true` if you have enabled MySQL binary logging.
+
+`AppConfig[:mysql_binlog] = false`
+
+### Solr backups
+
+#### `AppConfig[:solr_backup_schedule]`
+
+Set the Solr backup schedule. The default shown below (`"0 * * * *"`) runs a
+backup at minute 0 of every hour. See https://crontab.guru/ for
+information about the schedule syntax.
+
+`AppConfig[:solr_backup_schedule] = "0 * * * *"`
+
+#### `AppConfig[:solr_backup_number_to_keep]`
+
+Number of Solr backups to keep (default = 1).
+
+`AppConfig[:solr_backup_number_to_keep] = 1`
+
+#### `AppConfig[:solr_backup_directory]`
+
+Directory to store Solr backups.
+
+`AppConfig[:solr_backup_directory] = proc { File.join(AppConfig[:data_directory], "solr_backups") }`
+
+### Default Solr params
+
+#### `AppConfig[:solr_params]`
+
+Add default Solr parameters.
+
+A simple example: use AND for search:
+
+`AppConfig[:solr_params] = { "q.op" => "AND" }`
+
+A more complex example: set the boost query value (bq) to boost the relevancy
+for the query string in the title, set the phrase fields parameter (pf) to boost
+the relevancy for the title when the query terms are in close proximity to each
+other, and set the phrase slop (ps) parameter for the pf parameter to indicate
+how close the proximity should be:
+
+```ruby
+AppConfig[:solr_params] = {
+  "bq" => proc { "title:\"#{@query_string}\"*" },
+  "pf" => 'title^10',
+  "ps" => 0,
+}
+```
+
+### Language
+
+#### `AppConfig[:locale]`
+
+Set the application's language (see the `.yml` files in
+https://github.com/archivesspace/archivesspace/tree/master/common/locales
+for a list of available locale codes). Default is English (`:en`):
+
+`AppConfig[:locale] = :en`
+
+### Plugin registration
+
+#### `AppConfig[:plugins]`
+
+Plug-ins to load. They will load in the order specified.
+
+`AppConfig[:plugins] = ['local', 'lcnaf']`
+
+### Thread count
+
+#### `AppConfig[:job_thread_count]`
+
+The number of concurrent threads available to run background jobs. 
+
+Introduced because long-running jobs were blocking the queue.
+Resist the urge to set this to a big number!
+
+`AppConfig[:job_thread_count] = 2`
+
+### OAI configuration options
+
+**NOTE: As of version 2.5.2, the following parameters (oai_repository_name, oai_record_prefix, and oai_admin_email) have been deprecated. They should be set in the Staff User Interface. To set them, select the System menu in the Staff User Interface and then select Manage OAI-PMH Settings. These three settings are at the top of the page in the General Settings section. These settings will be completely removed from the config file when version 2.6.0 is released.**
+
+#### `AppConfig[:oai_repository_name]`
+
+`AppConfig[:oai_repository_name] = 'ArchivesSpace OAI Provider'`
+
+#### `AppConfig[:oai_record_prefix]`
+
+`AppConfig[:oai_record_prefix] = 'oai:archivesspace'`
+
+#### `AppConfig[:oai_admin_email]`
+
+`AppConfig[:oai_admin_email] = 'admin@example.com'`
+
+#### `AppConfig[:oai_sets]`
+
+In addition to the sets based on level of description, you can define OAI Sets
+based on repository codes and/or sponsors as follows:
+
+```ruby
+AppConfig[:oai_sets] = {
+  'repository_set' => {
+    :repo_codes => ['hello626'],
+    :description => "A set of one or more repositories",
+  },
+
+  'sponsor_set' => {
+    :sponsors => ['The_Sponsor'],
+    :description => "A set of one or more sponsors",
+  },
+}
+```
+
+## Other less commonly changed settings
+
+### Default admin password
+
+#### `AppConfig[:default_admin_password]`
+
+Set the default admin password. The default password is "admin".
+
+`AppConfig[:default_admin_password] = "admin"`
+
+### Data directories
+
+#### `AppConfig[:data_directory]`
+
+If you run ArchivesSpace using the standard scripts (`archivesspace.sh`,
+`archivesspace.bat`, or as a Windows service), the value of `:data_directory` is
+automatically set to be the "data" directory of your ArchivesSpace
+distribution. 
You don't need to change this value unless you specifically
+want ArchivesSpace to put its data files elsewhere.
+
+`AppConfig[:data_directory] = File.join(Dir.home, "ArchivesSpace")`
+
+#### `AppConfig[:backup_directory]`
+
+Directory to store automated backups when using the embedded demo database (Apache Derby instead of MySQL). This defaults to `demo_db_backups` within the `data` directory.
+
+`AppConfig[:backup_directory] = proc { File.join(AppConfig[:data_directory], "demo_db_backups") }`
+
+### Solr defaults
+
+#### `AppConfig[:solr_indexing_frequency_seconds]`
+
+The number of seconds between each run of the SUI and PUI indexers. The indexers will perform an indexing cycle every configured number of seconds.
+
+`AppConfig[:solr_indexing_frequency_seconds] = 30`
+
+#### `AppConfig[:solr_facet_limit]`
+
+The maximum number of distinct facet terms Solr will include in the response for a given field.
+
+`AppConfig[:solr_facet_limit] = 100`
+
+#### `AppConfig[:default_page_size]`
+
+The number of records included in each page in all paginated backend API responses.
+
+`AppConfig[:default_page_size] = 10`
+
+#### `AppConfig[:max_page_size]`
+
+Requests to the backend API can define a custom `page_size` parameter. This is the maximum allowed page size.
+
+`AppConfig[:max_page_size] = 250`
+
+### Cookie prefix
+
+#### `AppConfig[:cookie_prefix]`
+
+A prefix added to cookies used by the application.
+Change this if you're running more than one instance of ArchivesSpace on the
+same hostname (i.e. multiple instances on different ports).
+Default is "archivesspace".
+
+`AppConfig[:cookie_prefix] = "archivesspace"`
+
+### SUI Indexer settings
+
+The periodic indexer can run using multiple threads to take advantage of
+multiple CPU cores. 
By setting these two options, you can control how many
+CPU cores are used, and the amount of memory that will be consumed by the
+indexing process (more cores and/or more records per thread means more memory used).
+
+#### `AppConfig[:indexer_records_per_thread]`
+
+The size of each batch of records passed to each indexer worker thread to process and push to Solr. More records per thread means that more memory will be used by the indexer process.
+
+`AppConfig[:indexer_records_per_thread] = 25`
+
+#### `AppConfig[:indexer_thread_count]`
+
+The number of worker threads used by the SUI indexer. More worker threads means that more CPU cores will be used.
+
+`AppConfig[:indexer_thread_count] = 4`
+
+#### `AppConfig[:indexer_solr_timeout_seconds]`
+
+The indexer makes requests to Solr in order to push updated records to the Solr index. This is the maximum number of seconds that the indexer will wait for Solr to respond to a request.
+
+`AppConfig[:indexer_solr_timeout_seconds] = 300`
+
+### PUI Indexer Settings
+
+#### `AppConfig[:pui_indexer_enabled]`
+
+If `false`, no PUI indexer is started. Set to `false` if you are not using the PUI at all.
+
+`AppConfig[:pui_indexer_enabled] = true`
+
+#### `AppConfig[:pui_indexing_frequency_seconds]`
+
+The number of seconds between each run of the PUI indexer. The indexer will perform an indexing cycle every configured number of seconds.
+
+`AppConfig[:pui_indexing_frequency_seconds] = 30`
+
+#### `AppConfig[:pui_indexer_records_per_thread]`
+
+The size of each batch of records passed to each indexer worker thread to process and push to Solr.
+The PUI indexer can run using multiple threads to take advantage of
+multiple CPU cores. By setting these two options, you can control how many
+CPU cores are used, and the amount of memory that will be consumed by the
+indexing process (more cores and/or more records per thread means more memory used). 
+
+`AppConfig[:pui_indexer_records_per_thread] = 25`
+
+#### `AppConfig[:pui_indexer_thread_count]`
+
+The number of worker threads used by the PUI indexer. More worker threads means that more CPU cores will be used.
+
+`AppConfig[:pui_indexer_thread_count] = 1`
+
+### Index state
+
+#### `AppConfig[:index_state_class]`
+
+The indexer needs a place to store its state (to keep track of which records have already been indexed).
+Set to 'IndexState' (default) to store the state in the local `data` directory.
+Set to 'IndexStateS3' (optional) to store the state in an AWS S3 bucket.
+
+`AppConfig[:index_state_class] = 'IndexState'`
+
+#### `AppConfig[:index_state_s3]` - Relevant only when using S3 storage for the indexer state
+
+If using S3 storage for the indexer state (optional), you need to configure access to S3.
+
+NOTE: S3 charges for read/update requests, and the PUI indexer is continually
+writing to state files, so you may want to increase `pui_indexing_frequency_seconds` and `solr_indexing_frequency_seconds`.
+
+##### Configuring S3 access using environment variables (default)
+
+By default, the S3 configuration is fetched from the following shell environment variables:
+
+- `AWS_REGION`
+- `AWS_ACCESS_KEY_ID`
+- `AWS_SECRET_ACCESS_KEY`
+- `AWS_ASPACE_BUCKET`
+
+The indexer uses the `:cookie_prefix` configuration as a prefix for the state files stored in the bucket - useful when using the same bucket to store the indexer state of multiple ArchivesSpace instances.
+
+##### Configuring S3 access using AppConfig variable in the `config.rb` file
+
+```ruby
+AppConfig[:index_state_s3] = {
+  region: "us-east-1",
+  aws_access_key_id: "ASIAXXXXEXAMPLEID",
+  aws_secret_access_key: "xXxxXXxxXX/XXXXXX/XXXXXXXEXAMPLEKEY",
+  bucket: "my-as-test-bucket",
+  prefix: proc { "#{AppConfig[:cookie_prefix]}_" },
+}
+```
+
+ +### Misc. database options + +#### `AppConfig[:allow_other_unmapped]` + +Allow assigning the special enumeration value `other_unmapped` for dynamic enum (controlled value) fields. When set to `true` `other_unmapped` is treated as a valid value for all enumeration (controlled value) fields. The `other_unmapped` value is added as a possible value for all controlled value lists. +This feature is designed for handling unmapped or unknown enumeration values, eventually useful during data migrations where source data may have values not yet defined in controlled value lists, or generally importing external data that uses values that are not already defined in a controlled value list. + +`AppConfig[:allow_other_unmapped] = false` + +#### `AppConfig[:db_url_redacted]` + +This is how the database url (which includes the database username and password) will appear in the logs. The default replaces the username and password with `REDACTED`, so that: +`"user=john&password=secret123"` +becomes +`"user=[REDACTED]&password=[REDACTED]"` + +`AppConfig[:db_url_redacted] = proc { AppConfig[:db_url].gsub(/(user|password)=(.*?)(&|$)/, '\1=[REDACTED]\3') }` + +#### `AppConfig[:demo_db_backup_schedule]` + +When using the embedded demo database (Apache Derby instead of MySQL) this is the schedule of the automated backups, in cron format. By default, it is at 4AM every day. + +`AppConfig[:demo_db_backup_schedule] = "0 4 * * *"` + +#### `AppConfig[:demo_db_backup_number_to_keep] = 7` + +How many backups to keep available when using the embedded demo database + +`AppConfig[:demo_db_backup_number_to_keep] = 7` + +#### `AppConfig[:allow_unsupported_database]` + +Set this to true if you are determined to use a database other than MySQL or the embedded demo database based on Apache Derby (not-recommended!). 
+
+`AppConfig[:allow_unsupported_database] = false`
+
+#### `AppConfig[:allow_non_utf8_mysql_database]`
+
+Set this to `true` to skip the standard validation that the character encoding of MySQL tables is set to UTF-8 (not recommended!).
+
+`AppConfig[:allow_non_utf8_mysql_database] = false`
+
+### Proxy URLs
+
+If you are serving user-facing applications via proxy
+(i.e., another domain or port, or via HTTPS, or for a prefix), it is
+recommended that you record those URLs in your configuration.
+
+#### `AppConfig[:frontend_proxy_url]`
+
+Proxy URL for the frontend (staff interface).
+
+`AppConfig[:frontend_proxy_url] = proc { AppConfig[:frontend_url] }`
+
+#### `AppConfig[:public_proxy_url]`
+
+Proxy URL for the public interface.
+
+`AppConfig[:public_proxy_url] = proc { AppConfig[:public_url] }`
+
+#### `AppConfig[:oai_proxy_url]`
+
+Proxy URL for the OAI service (if exposed, see the OAI section).
+
+`AppConfig[:oai_proxy_url] = 'http://your-public-oai-url.example.com'`
+
+#### `AppConfig[:frontend_proxy_prefix]`
+
+Don't override this setting unless you know what you're doing.
+
+`AppConfig[:frontend_proxy_prefix] = proc { "#{URI(AppConfig[:frontend_proxy_url]).path}/".gsub(%r{/+$}, "/") }`
+
+#### `AppConfig[:public_proxy_prefix]`
+
+Don't override this setting unless you know what you're doing.
+
+`AppConfig[:public_proxy_prefix] = proc { "#{URI(AppConfig[:public_proxy_url]).path}/".gsub(%r{/+$}, "/") }`
+
+### Enable component applications
+
+Setting any of these to `false` will prevent the associated applications from starting.
+Temporarily disabling the frontend and public UIs and/or the indexer may help users
+who are running into memory-related issues during migration. 
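+
+For instance (a sketch, not a recommendation for every deployment), a
+migration run on a memory-constrained host might temporarily disable the
+public UI and the indexer:
+
+```ruby
+# Temporary overrides while migrating; restore to true afterwards.
+AppConfig[:enable_public] = false
+AppConfig[:enable_indexer] = false
+```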
+
+#### `AppConfig[:enable_backend]`
+
+`AppConfig[:enable_backend] = true`
+
+#### `AppConfig[:enable_frontend]`
+
+`AppConfig[:enable_frontend] = true`
+
+#### `AppConfig[:enable_public]`
+
+`AppConfig[:enable_public] = true`
+
+#### `AppConfig[:enable_solr]`
+
+`AppConfig[:enable_solr] = true`
+
+#### `AppConfig[:enable_indexer]`
+
+`AppConfig[:enable_indexer] = true`
+
+#### `AppConfig[:enable_docs]`
+
+`AppConfig[:enable_docs] = true`
+
+#### `AppConfig[:enable_oai]`
+
+`AppConfig[:enable_oai] = true`
+
+### Jetty shutdown
+
+Some use cases want the ability to shut down the Jetty service using Jetty's
+ShutdownHandler, which allows a `POST` request to a specific URI to signal
+server shutdown. The prefix for this URI path is set to `/xkcd` to reduce the
+possibility of a collision in the path configuration. So, the full path would be
+
+`/xkcd/shutdown?token={randomly generated password}`
+
+The launcher creates a password to use this, which is stored
+in the data directory. This is not turned on by default.
+
+#### `AppConfig[:use_jetty_shutdown_handler]`
+
+`AppConfig[:use_jetty_shutdown_handler] = false`
+
+#### `AppConfig[:jetty_shutdown_path]`
+
+`AppConfig[:jetty_shutdown_path] = "/xkcd"`
+
+### Managing multiple backend instances
+
+If you have multiple instances of the backend running behind a load
+balancer, list the URL of each backend instance here. This is used by the
+real-time indexer, which needs to connect directly to each running
+instance.
+
+By default, we assume you're not using a load balancer, so we just connect
+to the regular backend URL. 
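+
+For example (the internal hostnames here are hypothetical, not defaults),
+two backend instances behind a load balancer could be listed as:
+
+```ruby
+# Direct URLs of each backend instance, bypassing the load balancer,
+# so the real-time indexer can poll every instance's update feed:
+AppConfig[:backend_instance_urls] = [
+  "http://backend1.internal:8089",
+  "http://backend2.internal:8089",
+]
+```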
#### `AppConfig[:backend_instance_urls]`

`AppConfig[:backend_instance_urls] = proc { [AppConfig[:backend_url]] }`

### Theme

For theming customization, see https://docs.archivesspace.org/customization/theming/

#### `AppConfig[:frontend_theme]`

Name of the theme to use on the Staff UI

`AppConfig[:frontend_theme] = "default"`

#### `AppConfig[:public_theme]`

Name of the theme to use on the Public UI

`AppConfig[:public_theme] = "default"`

### Session expiration

#### `AppConfig[:session_expire_after_seconds]`

Sessions marked as expirable will time out after this number of seconds of inactivity

`AppConfig[:session_expire_after_seconds] = 3600`

#### `AppConfig[:session_nonexpirable_force_expire_after_seconds]`

Sessions marked as non-expirable will eventually expire too, but after a longer period.

`AppConfig[:session_nonexpirable_force_expire_after_seconds] = 604800`

### System usernames

Hidden system users (not viewable in the Staff UI user management area) are created automatically for the indexer, the PUI and the Staff UI to use when accessing the backend API.

#### `AppConfig[:search_username]`

The user name of the hidden system user that the indexer uses to access the backend API

`AppConfig[:search_username] = "search_indexer"`

#### `AppConfig[:public_username]`

The user name of the hidden system user that the PUI uses to access the backend API

`AppConfig[:public_username] = "public_anonymous"`

#### `AppConfig[:staff_username]`

The user name of the hidden system user that the Staff UI uses to access the backend API

`AppConfig[:staff_username] = "staff_system"`

### Authentication sources

ArchivesSpace comes with its own user management functionality but can also be configured to authenticate against one or more [LDAP directories](/customization/ldap/).
OAuth authentication is available via the [aspace-oauth plugin](https://github.com/lyrasis/aspace-oauth).

`AppConfig[:authentication_sources] = []`

### Misc. backlog and snapshot settings

#### `AppConfig[:realtime_index_backlog_ms]`

> TODO - Needs more documentation

`AppConfig[:realtime_index_backlog_ms] = 60000`

### Notifications configuration

An internal notification mechanism is used to keep user preferences, enumeration (controlled value list) values, repository information etc. up to date within the UI while minimizing requests to the backend API.

#### `AppConfig[:notifications_backlog_ms]`

Notifications older than this number of milliseconds are considered expired and are no longer announced.

`AppConfig[:notifications_backlog_ms] = 60000`

#### `AppConfig[:notifications_poll_frequency_ms]`

How often notifications should be announced, in milliseconds.

`AppConfig[:notifications_poll_frequency_ms] = 1000`

#### `AppConfig[:max_usernames_per_source]`

> TODO - Needs more documentation

`AppConfig[:max_usernames_per_source] = 50`

#### `AppConfig[:demodb_snapshot_flag]`

> TODO - Needs more documentation

`AppConfig[:demodb_snapshot_flag] = proc { File.join(AppConfig[:data_directory], "create_demodb_snapshot.txt") }`

### Report Configuration

#### `AppConfig[:report_page_layout]`

Uses valid values for the CSS3 @page directive's size property:
http://www.w3.org/TR/css3-page/#page-size-prop

`AppConfig[:report_page_layout] = "letter"`

#### `AppConfig[:report_pdf_font_paths]`

> TODO - Needs more documentation

`AppConfig[:report_pdf_font_paths] = proc { ["#{AppConfig[:backend_url]}/reports/static/fonts/dejavu/DejaVuSans.ttf"] }`

#### `AppConfig[:report_pdf_font_family]`

> TODO - Needs more documentation

`AppConfig[:report_pdf_font_family] = "\"DejaVu Sans\", sans-serif"`

### Plugins directory

#### `AppConfig[:plugins_directory]`

By default, the plugins directory will be in your ASpace Home.
If you want to override that, update this with an absolute path.

`AppConfig[:plugins_directory] = "plugins"`

### Feedback

#### `AppConfig[:feedback_url]`

URL for the feedback link in the footer.
You can remove the link from the footer by making this value blank.

`AppConfig[:feedback_url] = "http://archivesspace.org/contact"`

### User registration

#### `AppConfig[:allow_user_registration]`

Allow an unauthenticated user to create an account

`AppConfig[:allow_user_registration] = true`

### Help Configuration

#### `AppConfig[:help_enabled]`

> TODO - Needs more documentation

`AppConfig[:help_enabled] = true`

#### `AppConfig[:help_url]`

> TODO - Needs more documentation

`AppConfig[:help_url] = "https://archivesspace.atlassian.net/wiki/spaces/ArchivesSpaceUserManual/overview"`

#### `AppConfig[:help_topic_base_url]`

> TODO - Needs more documentation

`AppConfig[:help_topic_base_url] = "https://archivesspace.atlassian.net/wiki/spaces/ArchivesSpaceUserManual/pages/"`

### Shared storage

#### `AppConfig[:shared_storage]`

`AppConfig[:shared_storage] = proc { File.join(AppConfig[:data_directory], "shared") }`

### Background jobs

#### `AppConfig[:job_file_path]`

Formerly known as `:import_job_path`

> TODO - Needs more documentation

`AppConfig[:job_file_path] = proc { AppConfig.has_key?(:import_job_path) ? AppConfig[:import_job_path] : File.join(AppConfig[:shared_storage], "job_files") }`

#### `AppConfig[:job_poll_seconds]`

> TODO - Needs more documentation

`AppConfig[:job_poll_seconds] = proc { AppConfig.has_key?(:import_poll_seconds) ? AppConfig[:import_poll_seconds] : 5 }`

#### `AppConfig[:job_timeout_seconds]`

> TODO - Needs more documentation

`AppConfig[:job_timeout_seconds] = proc { AppConfig.has_key?(:import_timeout_seconds) ?
AppConfig[:import_timeout_seconds] : 300 }`

#### `AppConfig[:jobs_cancelable]`

By default, only allow jobs to be cancelled if we're running against MySQL (since we can roll back)

`AppConfig[:jobs_cancelable] = proc { (AppConfig[:db_url] != AppConfig.demo_db_url).to_s }`

### Locations

#### `AppConfig[:max_location_range]`

> TODO - Needs more documentation

`AppConfig[:max_location_range] = 1000`

### Schema Info check

#### `AppConfig[:ignore_schema_info_check]`

The ArchivesSpace backend will not start if the database's schema_info version is not set
correctly for this version of ArchivesSpace. This is to ensure that all the
migrations have run and completed before starting the app. You can override
this check here. Do so at your own peril.

`AppConfig[:ignore_schema_info_check] = false`

### Demo data

#### `AppConfig[:demo_data_url]`

This is a URL that points to some demo data that can be used for testing,
teaching, etc. To use this, set an OS environment variable of ASPACE_DEMO = true

`AppConfig[:demo_data_url] = "https://s3-us-west-2.amazonaws.com/archivesspacedemo/latest-demo-data.zip"`

### External IDs

#### `AppConfig[:show_external_ids]`

Expose external ids in the frontend

`AppConfig[:show_external_ids] = false`

### Jetty request/response buffer

Set the allowed size of the request/response headers that Jetty will accept
(anything bigger gets a 403 error). Note that if you increase these sizes, you
will also have to adjust your Nginx/Apache configuration accordingly if you are
proxying through one of them.

#### `AppConfig[:jetty_response_buffer_size_bytes]`

`AppConfig[:jetty_response_buffer_size_bytes] = 64 * 1024`

#### `AppConfig[:jetty_request_buffer_size_bytes]`

`AppConfig[:jetty_request_buffer_size_bytes] = 64 * 1024`

### Container management configuration fields

#### `AppConfig[:container_management_barcode_length]`

Defines global and repo-level barcode validations (validating on length only).
Barcodes that have either no value, or a value between :min and :max, will validate on save.
Set global constraints via :system_default, and use the repo_code value for repository-level constraints.
Note that :system_default will always inherit down its values when possible.

`AppConfig[:container_management_barcode_length] = {:system_default => {:min => 5, :max => 10}, 'repo' => {:min => 9, :max => 12}, 'other_repo' => {:min => 9, :max => 9} }`

#### `AppConfig[:container_management_extent_calculator]`

Globally defines the behavior of the extent calculator.
Use :report_volume (true/false) to define whether space should be reported in cubic
or linear dimensions.
Use :unit (:feet, :inches, :meters, :centimeters) to define the unit in which the calculator
reports extents.
Use :decimal_places to define how many decimal places the calculator should return.

Example:

`AppConfig[:container_management_extent_calculator] = { :report_volume => true, :unit => :feet, :decimal_places => 3 }`

### Record inheritance in public interface

#### `AppConfig[:record_inheritance]`

Defines the fields for a record type that are inherited from ancestors
if they don't have a value in the record itself.
This is used in common/record_inheritance.rb and was developed to support
the new public UI application.
Note that any changes to the record_inheritance config will require a reindex of PUI
records to take effect.
To do this, remove the files from indexer_pui_state.

```ruby
AppConfig[:record_inheritance] = {
  :archival_object => {
    :inherited_fields => [
      {
        :property => 'title',
        :inherit_directly => true
      },
      {
        :property => 'component_id',
        :inherit_directly => false
      },
      {
        :property => 'language',
        :inherit_directly => true
      },
      {
        :property => 'dates',
        :inherit_directly => true
      },
      {
        :property => 'extents',
        :inherit_directly => false
      },
      {
        :property => 'linked_agents',
        :inherit_if => proc {|json| json.select {|j| j['role'] == 'creator'} },
        :inherit_directly => false
      },
      {
        :property => 'notes',
        :inherit_if => proc {|json| json.select {|j| j['type'] == 'accessrestrict'} },
        :inherit_directly => true
      },
      {
        :property => 'notes',
        :inherit_if => proc {|json| json.select {|j| j['type'] == 'scopecontent'} },
        :inherit_directly => false
      },
      {
        :property => 'notes',
        :inherit_if => proc {|json| json.select {|j| j['type'] == 'langmaterial'} },
        :inherit_directly => false
      },
    ]
  }
}
```

To enable composite identifiers (added to the merged record in a property
`_composite_identifier`):

The values for `:include_level` and `:identifier_delimiter` shown here are the defaults.

If `:include_level` is set to true then level values (e.g. Series) will be included in `_composite_identifier`.

The `:identifier_delimiter` is used when joining the four-part identifier for resources.

```ruby
AppConfig[:record_inheritance][:archival_object][:composite_identifiers] = {
  :include_level => false,
  :identifier_delimiter => ' '
}
```

To configure additional elements to be inherited, use this pattern in your config:

```ruby
AppConfig[:record_inheritance][:archival_object][:inherited_fields] <<
  {
    :property => 'linked_agents',
    :inherit_if => proc {|json| json.select {|j| j['role'] == 'subject'} },
    :inherit_directly => true
  }
```

...
or use this pattern to add many new elements at once:

```ruby
AppConfig[:record_inheritance][:archival_object][:inherited_fields].concat(
  [
    {
      :property => 'subjects',
      :inherit_if => proc {|json|
        json.select {|j|
          ! j['_resolved']['terms'].select { |t| t['term_type'] == 'topical'}.empty? }
      },
      :inherit_directly => true
    },
    {
      :property => 'external_documents',
      :inherit_directly => false
    },
    {
      :property => 'rights_statements',
      :inherit_directly => false
    },
    {
      :property => 'instances',
      :inherit_directly => false
    },
  ])
```

If you want to modify any of the default rules, the safest approach is to uncomment
the entire default record_inheritance config and make your changes there.
For example, to stop scopecontent notes from being inherited into file or item records,
uncomment the default config above and add a skip_if
clause to the scopecontent rule, like this:

```ruby
  {
    :property => 'notes',
    :skip_if => proc {|json| ['file', 'item'].include?(json['level']) },
    :inherit_if => proc {|json| json.select {|j| j['type'] == 'scopecontent'} },
    :inherit_directly => false
  },
```

### PUI Configurations

#### `AppConfig[:pui_search_results_page_size]`

`AppConfig[:pui_search_results_page_size] = 10`

#### `AppConfig[:pui_branding_img]`

`AppConfig[:pui_branding_img] = 'archivesspace.small.png'`

#### `AppConfig[:pui_block_referrer]`

`AppConfig[:pui_block_referrer] = true # patron privacy; blocks full 'referer' when going outside the domain`

#### `AppConfig[:pui_max_concurrent_pdfs]`

The number of PDFs we'll generate (in the background) at the same time.

PDF generation can be a little memory-intensive for large collections, so we
set this fairly low out of the box.
`AppConfig[:pui_max_concurrent_pdfs] = 2`

#### `AppConfig[:pui_pdf_timeout]`

You can set this to nil or zero to prevent a timeout

`AppConfig[:pui_pdf_timeout] = 600`

#### `AppConfig[:pui_hide]`

`AppConfig[:pui_hide] = {}`

The following determine which 'tabs' are on the main horizontal menu:

```ruby
AppConfig[:pui_hide][:repositories] = false
AppConfig[:pui_hide][:resources] = false
AppConfig[:pui_hide][:digital_objects] = false
AppConfig[:pui_hide][:accessions] = false
AppConfig[:pui_hide][:subjects] = false
AppConfig[:pui_hide][:agents] = false
AppConfig[:pui_hide][:classifications] = false
AppConfig[:pui_hide][:search_tab] = false
```

The following determine globally whether the various "badges" appear on the Repository page.
They can be overridden at the repository level below (e.g.
`AppConfig[:repos][{repo_code}][:hide][:counts] = true`):

```ruby
AppConfig[:pui_hide][:resource_badge] = false
AppConfig[:pui_hide][:record_badge] = true # hide by default
AppConfig[:pui_hide][:digital_object_badge] = false
AppConfig[:pui_hide][:accession_badge] = false
AppConfig[:pui_hide][:subject_badge] = false
AppConfig[:pui_hide][:agent_badge] = false
AppConfig[:pui_hide][:classification_badge] = false
AppConfig[:pui_hide][:counts] = false
```

The following determines globally whether the 'container inventory' navigation
tab/pill is hidden on the resource/collection page:

```ruby
AppConfig[:pui_hide][:container_inventory] = false
```

#### `AppConfig[:pui_requests_permitted_for_types]`

Determines when the request button is displayed

`AppConfig[:pui_requests_permitted_for_types] = [:resource, :archival_object, :accession, :digital_object, :digital_object_component]`

#### `AppConfig[:pui_requests_permitted_for_containers_only]`

Set to true if you want to disable requests when there is no top container

`AppConfig[:pui_requests_permitted_for_containers_only] = false`

#### `AppConfig[:pui_repos]`

Repository-specific examples.
Replace {repo_code} with your repository code, e.g. 'foo' (note the lower case).

`AppConfig[:pui_repos] = {}`

Examples:

For a particular repository, only enable requests for certain record types (note that this configuration will override `AppConfig[:pui_requests_permitted_for_types]` for the repository):

```ruby
AppConfig[:pui_repos]['foo'][:requests_permitted_for_types] = [:resource, :archival_object, :accession, :digital_object, :digital_object_component]
```

For a particular repository, disable requests entirely:

```ruby
AppConfig[:pui_repos]['foo'][:requests_permitted_for_containers_only] = true
```

Set the email address to which any repository requests are sent:

```ruby
AppConfig[:pui_repos]['foo'][:request_email] = {email address}
```

> TODO - Needs more documentation here

```ruby
AppConfig[:pui_repos]['foo'][:hide] = {}
AppConfig[:pui_repos]['foo'][:hide][:counts] = true
```

#### `AppConfig[:pui_display_deaccessions]`

> TODO - Needs more documentation

`AppConfig[:pui_display_deaccessions] = true`

#### `AppConfig[:pui_page_actions_cite]`

Enable / disable the PUI resource/archival object page 'cite' action

`AppConfig[:pui_page_actions_cite] = true`

#### `AppConfig[:pui_page_actions_bookmark]`

Enable / disable the PUI resource/archival object page 'bookmark' action

`AppConfig[:pui_page_actions_bookmark] = true`

#### `AppConfig[:pui_page_actions_request]`

Enable / disable the PUI resource/archival object page 'request' action

`AppConfig[:pui_page_actions_request] = true`

#### `AppConfig[:pui_page_actions_print]`

Enable / disable the PUI resource/archival object page 'print' action

`AppConfig[:pui_page_actions_print] = true`

#### `AppConfig[:pui_enable_staff_link]`

When a user is authenticated, add a link back to the staff interface from the specified record

`AppConfig[:pui_enable_staff_link] = true`

#### `AppConfig[:pui_staff_link_mode]`

By default, the staff link will open the record in the staff interface in edit mode;
change this to 'readonly' to have it open in read-only mode.

`AppConfig[:pui_staff_link_mode] = 'edit'`

#### `AppConfig[:pui_page_custom_actions]`

Add page actions via the configuration

`AppConfig[:pui_page_custom_actions] = []`

JavaScript action example:

```ruby
AppConfig[:pui_page_custom_actions] << {
  'record_type' => ['resource', 'archival_object'], # the jsonmodel types to show the action for
  'label' => 'actions.do_something', # the I18n path for the action button
  'icon' => 'fa-paw', # the font-awesome icon CSS class
  'onclick_javascript' => 'alert("do something grand");',
}
```

Hyperlink action example:

```ruby
AppConfig[:pui_page_custom_actions] << {
  'record_type' => ['resource', 'archival_object'], # the jsonmodel types to show the action for
  'label' => 'actions.do_something', # the I18n path for the action button
  'icon' => 'fa-paw', # the font-awesome icon CSS class
  'url_proc' => proc {|record| 'http://example.com/aspace?uri='+record.uri},
}
```

Form-POST action example:

```ruby
AppConfig[:pui_page_custom_actions] << {
  'record_type' => ['resource', 'archival_object'], # the jsonmodel types to show the action for
  'label' => 'actions.do_something', # the I18n path for the action button
  'icon' => 'fa-paw', # the font-awesome icon CSS class
  # 'post_params_proc' returns a hash of params which populates a form with hidden inputs ('name' => 'value')
  'post_params_proc' => proc {|record| {'uri' => record.uri, 'display_string' => record.display_string} },
  # 'url_proc' returns the URL for the form to POST to
  'url_proc' => proc {|record| 'http://example.com/aspace?uri='+record.uri},
  # 'form_id' as string to be used as the form's ID
  'form_id' => 'my_grand_action',
}
```

ERB action example:

```ruby
AppConfig[:pui_page_custom_actions] << {
  'record_type' => ['resource', 'archival_object'],
  # the jsonmodel types to show the action for
  # 'erb_partial' returns the path to an erb template from which the action will be rendered
  'erb_partial'
=> 'shared/my_special_action',
}
```

#### `AppConfig[:pui_email_enabled]`

PUI email settings (emails are logged rather than sent when this is disabled)

`AppConfig[:pui_email_enabled] = false`

#### `AppConfig[:pui_email_override]`

See `AppConfig[:pui_repos][{repo_code}][:request_email]` above for setting repository-level email overrides.
When set (e.g. for testing), `pui_email_override` will be the to-address for all sent emails.

`AppConfig[:pui_email_override] = 'testing@example.com'`

#### `AppConfig[:pui_request_email_fallback_to_address]`

The 'to' email address for repositories that don't define their own email

`AppConfig[:pui_request_email_fallback_to_address] = 'testing@example.com'`

#### `AppConfig[:pui_request_email_fallback_from_address]`

The 'from' email address for repositories that don't define their own email

`AppConfig[:pui_request_email_fallback_from_address] = 'testing@example.com'`

#### `AppConfig[:pui_request_use_repo_email]`

Use the repository record email address for requests (overrides the config email)

`AppConfig[:pui_request_use_repo_email] = false`

#### `AppConfig[:pui_email_delivery_method]`

`AppConfig[:pui_email_delivery_method] = :sendmail`

#### `AppConfig[:pui_email_sendmail_settings]`

```ruby
AppConfig[:pui_email_sendmail_settings] = {
  location: '/usr/sbin/sendmail',
  arguments: '-i'
}
```

#### `AppConfig[:pui_email_smtp_settings]`

Applies when `AppConfig[:pui_email_delivery_method]` is set to `:smtp`.

Example SMTP configuration:

```ruby
AppConfig[:pui_email_smtp_settings] = {
  address: 'smtp.gmail.com',
  port: 587,
  domain: 'gmail.com',
  user_name: '<username>',
  password: '<password>',
  authentication: 'plain',
  enable_starttls_auto: true,
}
```

#### `AppConfig[:pui_email_perform_deliveries]`

`AppConfig[:pui_email_perform_deliveries] = true`

#### `AppConfig[:pui_email_raise_delivery_errors]`

`AppConfig[:pui_email_raise_delivery_errors] = true`

#### `AppConfig[:pui_readmore_max_characters]`
+ +The number of characters to truncate before showing the 'Read More' link on notes + +`AppConfig[:pui_readmore_max_characters] = 450` + +#### `AppConfig[:pui_expand_all]` + +Whether to expand all additional information blocks at the bottom of record pages by default. `true` expands all blocks, `false` collapses all blocks. + +`AppConfig[:pui_expand_all] = false` + +#### `AppConfig[:max_search_columns]` + +Use to specify the maximum number of columns to display when searching or browsing + +`AppConfig[:max_search_columns] = 7` diff --git a/src/content/docs/es/customization/index.md b/src/content/docs/es/customization/index.md new file mode 100644 index 0000000..fd97d72 --- /dev/null +++ b/src/content/docs/es/customization/index.md @@ -0,0 +1,13 @@ +--- +title: Customization and configuration +description: Index of the pages within the Customization section of the website. +--- + +- [Configuring ArchivesSpace](./configuration) +- [Configuring LDAP authentication](./ldap) +- [Adding support for additional username/password-based authentication backends](./authentication) +- [Customizing text in ArchivesSpace](./locales) +- [ArchivesSpace Plug-ins](./plugins) +- [Theming ArchivesSpace](./theming) +- [Managing frontend assets with Bower](./bower) +- [Adding custom reports](./reports) diff --git a/src/content/docs/es/customization/ldap.md b/src/content/docs/es/customization/ldap.md new file mode 100644 index 0000000..ca4ac29 --- /dev/null +++ b/src/content/docs/es/customization/ldap.md @@ -0,0 +1,70 @@ +--- +title: LDAP authentication +description: Instructions on how to manage and authenticate against one or more LDAP directories. +--- + +ArchivesSpace can manage its own user directory, but can also be +configured to authenticate against one or more LDAP directories by +specifying them in the application's configuration file. When a user +attempts to log in, each authentication source is tried until one +matches. 
Here is a minimal example of an LDAP configuration:

```ruby
AppConfig[:authentication_sources] = [{
  :model => 'LDAPAuth',
  :hostname => 'ldap.example.com',
  :port => 389,
  :base_dn => 'ou=people,dc=example,dc=com',
  :username_attribute => 'uid',
  :attribute_map => {:cn => :name},
}]
```

With this configuration, ArchivesSpace performs authentication by
connecting to `ldap://ldap.example.com:389/`, binding anonymously, and
searching the `ou=people,dc=example,dc=com` tree for `uid = <username>`.

If the user is found, ArchivesSpace authenticates them by
binding with the password they supplied. Finally, the `:attribute_map`
entry specifies how LDAP attributes should be mapped to ArchivesSpace
user attributes (mapping LDAP's `cn` to ArchivesSpace's `name` in the
above example).

Many LDAP directories don't support anonymous binding. To integrate
with such a directory, you will need to specify the username and
password of a user with permission to connect to the directory and
search for other users. Modifying the previous example for this case
looks like this:

```ruby
AppConfig[:authentication_sources] = [{
  :model => 'LDAPAuth',
  :hostname => 'ldap.example.com',
  :port => 389,
  :base_dn => 'ou=people,dc=example,dc=com',
  :username_attribute => 'uid',
  :attribute_map => {:cn => :name},
  :bind_dn => 'uid=archivesspace_auth,ou=system,dc=example,dc=com',
  :bind_password => 'secretsquirrel',
}]
```

Finally, some LDAP directories enforce the use of SSL encryption.
To configure ArchivesSpace to connect via LDAPS, change the port as
appropriate and specify the `encryption` option:

```ruby
AppConfig[:authentication_sources] = [{
  :model => 'LDAPAuth',
  :hostname => 'ldap.example.com',
  :port => 636,
  :base_dn => 'ou=people,dc=example,dc=com',
  :username_attribute => 'uid',
  :attribute_map => {:cn => :name},
  :bind_dn => 'uid=archivesspace_auth,ou=system,dc=example,dc=com',
  :bind_password => 'secretsquirrel',
  :encryption => :simple_tls,
}]
```
diff --git a/src/content/docs/es/customization/locales.md b/src/content/docs/es/customization/locales.md
new file mode 100644
index 0000000..f408128
--- /dev/null
+++ b/src/content/docs/es/customization/locales.md
@@ -0,0 +1,78 @@
---
title: Customizing text
description: Instructions for customizing text in ArchivesSpace using locale files, including how to override labels, messages, tooltips, and placeholders via the Rails I18n API.
---

ArchivesSpace has abstracted all the labels, messages and tooltips out of the
application into locale files, which are part of the
[Rails Internationalization (I18n)](http://guides.rubyonrails.org/i18n.html) API.
The locales in this directory represent the
basis of translations for use by all ArchivesSpace applications. Each
application may then add to or override these values with its own locale files.

For a guide on managing these "i18n" files, please visit http://guides.rubyonrails.org/i18n.html

You can see the source files for both the [Staff Frontend Application](https://github.com/archivesspace/archivesspace/tree/master/frontend/config/locales) and
[Public Application](https://github.com/archivesspace/archivesspace/tree/master/public/config/locales). There is also a [common locale file](https://github.com/archivesspace/archivesspace/blob/master/common/locales/en.yml) for some values used throughout the ArchivesSpace applications.
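Conceptually, values from an override locale file are layered over the base translations, with base values filling any gaps. The following is a simplified, self-contained sketch of that layering, not the actual Rails I18n implementation:

```ruby
# Simplified sketch of locale layering (not the real Rails I18n code):
# override values win; anything not overridden falls through to the base.
base     = { 'en' => { 'brand' => { 'title' => 'ArchivesSpace', 'home' => 'Home' } } }
override = { 'en' => { 'brand' => { 'title' => 'My Archive' } } }

deep_merge = lambda do |a, b|
  a.merge(b) do |_key, old_val, new_val|
    old_val.is_a?(Hash) && new_val.is_a?(Hash) ? deep_merge.call(old_val, new_val) : new_val
  end
end

merged = deep_merge.call(base, override)
puts merged['en']['brand']['title'] # overridden value
puts merged['en']['brand']['home']  # falls through to the base
```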
The base translations are broken up as follows:

- The topmost file, "en.yml", contains the translations for all the record labels, messages and tooltips in English
- "enums/en.yml" contains the entries for the dynamic enumeration codes - add your translations to this file after importing your enumeration codes

These values are pulled into the views using the `I18n.t()` method, like `I18n.t("brand.welcome_message")`.

If the value you want to override is in the common locale file (like the "digital object title" field label, for example), you can change this by simply editing the locales/en.yml file in your ArchivesSpace distribution home directory. A restart is required to have the changes take effect.

If the value you want to change is in either the public or staff specific en.yml files, you can override these values using the plugins directory. For example, if you want to change the welcome message on the public frontend, make a file in your ArchivesSpace distribution called 'plugins/local/public/locales/en.yml' and add the following values:

```yaml
en:
  brand:
    title: My Archive
    home: Home
    welcome_message: HEY HEY HEY!!
```

If you restart ArchivesSpace, these values will take effect.

If you are adding a new value, you will also need to add the value into the Staff Frontend Application by clicking on the System dropdown menu and choosing Manage Controlled Value Lists. Select the list and add the value. If you restart ArchivesSpace, the translation value that you set in the yml file should appear.

If you're using a different language, simply swap out en.yml for another locale file (like fr.yml) and update the locale setting in config.rb (e.g., `AppConfig[:locale] = :fr`).

## Tooltips

To add a tooltip to a record label, simply add a new entry with `_tooltip`
For example, to add a tooltip for the Accession's +Title field: + +```yaml +en: + accession: + title: Title + title_tooltip: | + <p>The title assigned to an accession or resource. The accession title + need not be the same as the resource title. Moreover, a title need not + be expressed for the accession record, as it can be implicitly + inherited from the resource record to which the accession is + linked.</p> +``` + +## Placeholders + +For text fields or text areas, you may like to have some placeholder text to be +displayed when the field is empty (for more details see +http://www.w3.org/html/wg/drafts/html/master/forms.html#the-placeholder-attribute). +Please note while most modern browser releases support this feature, +older version will not. + +To add a placeholder to a record's text field, add a new entry of the label's +code append with "\_placeholder". For example: + +```yaml +en: + accession: + title: Title + title_placeholder: See DACS 2.3.18-2.3.22 +``` diff --git a/src/content/docs/es/customization/plugins.md b/src/content/docs/es/customization/plugins.md new file mode 100644 index 0000000..c9c4f95 --- /dev/null +++ b/src/content/docs/es/customization/plugins.md @@ -0,0 +1,343 @@ +--- +title: Plugins +description: An overview of how to develop, structure, enable, and configure plugins in ArchivesSpace to customize application behavior, interface, branding, and search functionality without altering core code. +--- + +Plugins are a powerful feature, designed to allow you to change +most aspects of how the application behaves. + +Plugins provide a mechanism to customize ArchivesSpace by overriding or extending functions +without changing the core codebase. As they are self-contained, they also permit the ready +sharing of packages of customization between ArchivesSpace instances. + +The ArchivesSpace distribution comes with the `hello_world` exemplar plugin. 
Please refer to its [README file](https://github.com/archivesspace/archivesspace/blob/master/plugins/hello_world/README.md) for a detailed description of how it is constructed and implemented.

You can find other examples in the following plugin repositories:

- [archivesspace-plugins](https://github.com/archivesspace-plugins): ArchivesSpace plugins that are officially supported and maintained by the ArchivesSpace Program Team.
- [archivesspace-deprecated](https://github.com/archivesspace-deprecated): deprecated code which is no longer supported but has been kept for future reference.
- [archivesspace-labs](https://github.com/archivesspace-labs): an open/unmanaged GitHub repository where community members can share their code. ArchivesSnake, the community-developed Python library for interacting with the ArchivesSpace API, is managed here.

## Enabling plugins

Plugins are enabled by placing them in the `plugins` directory, and referencing them in the
ArchivesSpace configuration, `config/config.rb`. For example:

```ruby
AppConfig[:plugins] = ['local', 'hello_world', 'my_plugin']
```

This configuration assumes the following directories exist:

    plugins
      hello_world
      local
      my_plugin

Note that the order in which the plugins are listed in the `:plugins` configuration option
determines the order in which they are loaded by the application.

## Plugin structure

The directory structure within a plugin is similar to the structure of the core application.
The following shows the supported plugin structure. Files contained in these directories can
be used to override or extend the behavior of the core application.

    backend
      controllers ......... backend endpoints
      model ............... database mapping models
      converters .......... classes for importing data
      job_runners ......... classes for defining background jobs
      plugin_init.rb ......
if present, loaded when the backend first starts
      lib/bulk_import ..... bulk import processor
    frontend
      assets .............. static assets (such as images, javascript) in the staff interface
      controllers ......... controllers for the staff interface
      locales ............. locale translations for the staff interface
      views ............... templates for the staff interface
      plugin_init.rb ...... if present, loaded when the staff interface first starts
    public
      assets .............. static assets (such as images, javascript) in the public interface
      controllers ......... controllers for the public interface
      locales ............. locale translations for the public interface
      views ............... templates for the public interface
      plugin_init.rb ...... if present, loaded when the public interface first starts
    migrations ............ database migrations
    schemas ............... JSONModel schema definitions
    search_definitions.rb . Advanced search fields

**Note** that `backend/lib/bulk_import` is the only directory in `backend/lib/` that is loaded by the plugin manager. Other files in `backend/lib/` will not be loaded during startup.

**Note** that, in order to override or extend the behavior of core models and controllers, you cannot simply put your replacement with the same name in the corresponding directory path. Core models and controllers can be overridden by adding an `after_initialize` block to `plugin_init.rb` (e.g. [aspace-hvd-pui](https://github.com/harvard-library/aspace-hvd-pui/blob/master/public/plugin_init.rb#L43)).

## Overriding behavior

A general rule is: to override behavior, rather than extend it, match the path
to the file that contains the behavior to be overridden.

It is not necessary for a plugin to have all of these directories.
For example, to override
+some part of a locale file for the staff interface, you can just add the following structure
+to the local plugin:
+
+    plugins/local/frontend/locales/en.yml
+
+More detailed information about overriding locale files is found in [Customizing text in ArchivesSpace](/customization/locales).
+
+## Overriding the visual (web) presentation
+
+You can directly override any view file in the core application by placing an erb file of the same name in the analogous path.
+For example, if you want to override the appearance of the "Welcome" [home] page of the Public User Interface, you can make your changes to a file `show.html.erb` and place it at `plugins/my_fine_plugin/public/views/welcome/show.html.erb` (where _my_fine_plugin_ is the name of your plugin).
+
+### Implementing a broadly-applied style or javascript change
+
+Unless you want to write inline style or javascript (which may be practicable for a template or two), best practice is to create `plugins/my_fine_plugin/public/views/layout_head.html.erb` or `plugins/my_fine_plugin/frontend/views/layout_head.html.erb`, which contains the HTML statements to incorporate your javascript or css into the `<head>` element of the template. Here's an example:
+
+- For the public interface, I want to change the size of the text in all links when the user is hovering.
+  - I create `plugins/my_fine_plugin/public/assets/my.css`:
+    ```css
+    a:hover {
+      font-size: 2em;
+    }
+    ```
+  - I create `plugins/my_fine_plugin/public/views/layout_head.html.erb`, and insert:
+    ```erb
+    <%= stylesheet_link_tag "#{@base_url}/assets/my.css", media: :all %>
+    ```
+- For the public interface, I want to add some javascript behavior such that, when the user hovers over a list item, asterisks appear.
+  - I create `plugins/my_fine_plugin/public/assets/my.js`:
+    ```javascript
+    $(function () {
+      $('li').hover(
+        function () {
+          $(this).append($('<span> ***</span>'))
+        },
+        function () {
+          $(this).find('span:last').remove()
+        }
+      )
+    })
+    ```
+  - I add to `plugins/my_fine_plugin/public/views/layout_head.html.erb`:
+    ```erb
+    <%= javascript_include_tag "#{@base_url}/assets/my.js" %>
+    ```
+
+## Adding your own branding
+
+As another example, to override the branding of the staff interface, add
+your own template at:
+
+    plugins/local/frontend/views/site/_branding.html.erb
+
+Files such as images, stylesheets and PDFs can be made available as static resources by
+placing them in an `assets` directory under an enabled plugin. For example, the following file:
+
+    plugins/local/frontend/assets/my_logo.png
+
+will be available via the following URL:
+
+    http://your.frontend.domain.and:port/assets/my_logo.png
+
+For example, to reference this logo from the custom branding file, use
+markup such as:
+
+```erb
+  <div class="container branding">
+    <img src="<%= AppConfig[:frontend_proxy_prefix] %>assets/my_logo.png" alt="My logo" />
+  </div>
+```
+
+## Customizing the favicon
+
+A favicon is an icon associated with a web page that browsers and operating systems display (e.g. in a browser's address bar or tab, or next to the web page name in a bookmark list).
+
+### Default images
+
+The ArchivesSpace favicons are stored in the top-level `public/` directory of the frontend and public applications.
+
+1. `frontend/public/favicon-AS.png`
+2. `frontend/public/favicon-AS.svg`
+3. `public/public/favicon-AS.png`
+4. `public/public/favicon-AS.svg`
+
+### Markup
+
+Favicon markup is found in each application's favicon partial template:
+
+1. `frontend/app/views/site/_favicon.html.erb`
+2. `public/app/views/shared/_favicon.html.erb`
+
+### Configuration
+
+Favicons are shown by default via the configuration options in `config.rb` (or `common/config/config-defaults.rb` in development). Set the respective option to `false` to not show a favicon.
+
+```ruby
+# config.rb
+AppConfig[:pui_show_favicon] = true # whether or not to show a favicon
+AppConfig[:frontend_show_favicon] = true # whether or not to show a favicon
+```
+
+### Plugin examples
+
+Replace the default favicon with your own via a plugin.
+
+:::caution[Reserved favicon filenames]
+Custom favicon files must be named something other than `favicon-AS.png` and `favicon-AS.svg` in order to override the default favicon.
+:::
+
+#### Frontend
+
+The frontend plugin should have the following directory structure:
+
+```
+plugins/local/frontend/
+├── assets
+│   ├── favicon.png
+│   └── favicon.svg
+└── views
+    └── site
+        └── _favicon.html.erb
+```
+
+The frontend favicon template should look something like:
+
+```erb
+<!-- plugins/local/frontend/views/site/_favicon.html.erb -->
+<link rel="icon" type="image/png" href="<%= AppConfig[:frontend_proxy_prefix] %>assets/favicon.png">
+<link rel="icon" type="image/svg+xml" href="<%= AppConfig[:frontend_proxy_prefix] %>assets/favicon.svg">
+```
+
+#### Public
+
+The public plugin should have the following directory structure:
+
+```
+plugins/local/public/
+├── assets
+│   ├── favicon.png
+│   └── favicon.svg
+└── views
+    └── shared
+        └── _favicon.html.erb
+```
+
+The public favicon template should look something like:
+
+```erb
+<!-- plugins/local/public/views/shared/_favicon.html.erb -->
+<link rel="icon" type="image/png" href="<%= asset_path('favicon.png', skip_pipeline: true) %>">
+<link rel="icon" type="image/svg+xml" href="<%= asset_path('favicon.svg', skip_pipeline: true) %>">
+```
+
+## Plugin configuration
+
+Plugins can optionally contain a configuration file at `plugins/[plugin-name]/config.yml`.
+This configuration file supports the following options:
+
+    system_menu_controller
+      The name of a controller that will be accessible via a Plugins menu in the System toolbar
+    repository_menu_controller
+      The name of a controller that will be accessible via a Plugins menu in the Repository toolbar
+    parents
+      [record-type]
+        name
+        cardinality
+      ...
+
+`system_menu_controller` and `repository_menu_controller` specify the names of frontend controllers
+that will be accessible via the system and repository toolbars respectively. A `Plugins` dropdown
+will appear in the toolbars if any enabled plugins have declared these configuration options. The
+controller name follows the standard naming conventions. For example:
+
+```yaml
+repository_menu_controller: hello_world
+```
+
+This points to a controller file at `plugins/hello_world/frontend/controllers/hello_world_controller.rb`
+which implements a controller class called `HelloWorldController`. When the menu item is selected
+by the user, the `index` action is called on the controller.
+
+Note that the URLs for plugin controllers are scoped under `plugins`, so the URL for the above
+example is:
+
+    http://your.frontend.domain.and:port/plugins/hello_world
+
+Also note that the translation for the plugin's name in the `Plugins` dropdown menu is specified
+in a locale file in the `frontend/locales` directory in the plugin. 
For example, in the `hello_world`
+example there is an English locale file at:
+
+    plugins/hello_world/frontend/locales/en.yml
+
+The translation for the plugin name in the `Plugins` dropdown menu is specified by the key `label`
+under the plugin, like this:
+
+```yaml
+en:
+  plugins:
+    hello_world:
+      label: Hello World
+```
+
+Note that the example locale file contains other keys that specify translations for text displayed
+as part of the plugin's user interface. Be sure to place your plugin's translations as shown, under
+`plugins.[your_plugin_name]` in order to avoid accidentally overriding translations for other
+interface elements. In the example above, the translation for the `label` key can be referenced
+directly in an erb view file as follows:
+
+```erb
+<%= I18n.t("plugins.hello_world.label") %>
+```
+
+Each entry under `parents` specifies a record type that this plugin provides a new subrecord for.
+`[record-type]` is the name of the existing record type, for example `accession`. `name` is the
+name of the plugin in its role as a subrecord of this parent, for example `hello_worlds`.
+`cardinality` specifies the cardinality of the plugin records. Currently supported values are
+`zero-to-many` and `zero-to-one`.
+
+## Changing search behavior
+
+A plugin can add additional fields to the advanced search interface by
+including a `search_definitions.rb` file at the top-level of the
+plugin directory. This file can contain definitions such as the
+following:
+
+```ruby
+AdvancedSearch.define_field(:name => 'payment_fund_code', :type => :enum, :visibility => [:staff], :solr_field => 'payment_fund_code_u_utext')
+AdvancedSearch.define_field(:name => 'payment_authorizers', :type => :text, :visibility => [:staff], :solr_field => 'payment_authorizers_u_utext')
+```
+
+Each field defined will appear in the advanced search interface as a
+searchable field. The `:visibility` option controls whether the field
+is presented in the staff or public interface (or both), while the
+`:type` parameter determines what sort of search is being performed.
+Valid values are `:text`, `:boolean`, `:date` and `:enum`. Finally,
+the `:solr_field` parameter controls which field is used from the
+underlying index.
+
+## Adding Custom Reports
+
+Custom reports may be added to plugins by adding a new report model as a subclass of `AbstractReport` to `plugins/[plugin-name]/backend/model/`, and the translations for said model to `plugins/[plugin-name]/frontend/locales/[language].yml`. Look to existing reports in the `reports` subdirectory of the ArchivesSpace base directory for examples of how to structure a report model.
+
+There are several limitations to adding reports to plugins, including that reports from plugins may only use the generic report template. ArchivesSpace only searches for report templates in the `reports` subdirectory of the ArchivesSpace base directory, not in plugin directories. If you would like to implement a custom report with a custom template, consider adding the report to `archivesspace/reports/` instead of `archivesspace/plugins/[plugin-name]/backend/model/`.
+
+## Frontend Specific Hooks
+
+To make adding new record fields and sections to record forms a little easier via your plugin, the ArchivesSpace frontend provides a series of hooks via the `frontend/config/initializers/plugin.rb` module. These are as follows:
+
+- `Plugins.add_search_base_facets(*facets)` - add to the base facets list to include extra facets for all record searches and listing pages.
+
+- `Plugins.add_search_facets(jsonmodel_type, *facets)` - add facets for a particular JSONModel type to be included in searches and listing pages for that record type.
+
+- `Plugins.add_resolve_field(field_name)` - use this when you have added a new field/relationship and you need it to be resolved when the record is retrieved from the API. 
+
+- `Plugins.register_edit_role_for_type(jsonmodel_type, role)` - when you add a new top-level JSONModel, register it and its edit role so the listing view can determine if the "Edit" button can be displayed to the user.
+
+- `Plugins.register_note_types_handler(proc)` where proc handles parameters `jsonmodel_type, note_types, context` - allows a plugin to customize the note types shown for a particular JSONModel type. For example, you can filter out those that do not apply to your institution.
+
+- `Plugins.register_plugin_section(section)` - allows you to define a template to be inserted as a section for a given JSONModel record. A section is a type of `Plugins::AbstractPluginSection` which defines the source `plugin`, the section `name`, the `jsonmodel_types` for which the section should show, and any `opts` required by the templates at the time of render. These new sections (readonly, edit and sidebar additions) are output as part of the `PluginHelper` render methods.
+
+  `Plugins::AbstractPluginSection` can be subclassed to allow flexible inclusion of arbitrary HTML. There are two examples provided with ArchivesSpace:
+  - `Plugins::PluginSubRecord` - uses the `shared/subrecord` partial to output a standard styled ArchivesSpace section. `opts` requires the jsonmodel field to be defined.
+
+  - `Plugins::PluginReadonlySearch` - uses the `search/embedded` partial to output a search listing as a section. `opts` requires the custom filter terms for this search to be defined.
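To make the registration pattern concrete, here is a small, self-contained sketch. The `Plugins` module below is a stand-in written for this example only (the real module ships in `frontend/config/initializers/plugin.rb`), and the facet and field names are hypothetical, borrowed from the search-definition example above:

```ruby
# Illustrative stand-in only -- NOT the real ArchivesSpace Plugins module.
# It just shows the shape of the registration calls a plugin makes.
module Plugins
  @search_facets  = Hash.new { |h, k| h[k] = [] } # facets keyed by JSONModel type
  @resolve_fields = []                            # extra fields to resolve via the API

  # Register extra facets for a particular JSONModel type.
  def self.add_search_facets(jsonmodel_type, *facets)
    @search_facets[jsonmodel_type].concat(facets)
  end

  # Register a field/relationship to be resolved when records are retrieved.
  def self.add_resolve_field(field_name)
    @resolve_fields << field_name
  end

  class << self
    attr_reader :search_facets, :resolve_fields
  end
end

# What a plugin might do, typically from its frontend plugin_init.rb:
Plugins.add_search_facets(:accession, 'payment_fund_code_u_utext')
Plugins.add_resolve_field('payment_authorizers')
```

In a real plugin you would only write the last two lines; the registry bookkeeping sketched here is handled by ArchivesSpace itself.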
+ +## Further information + +**Be sure to test your plugin thoroughly as it may have unanticipated impacts on your +ArchivesSpace application.** diff --git a/src/content/docs/es/customization/reports.md b/src/content/docs/es/customization/reports.md new file mode 100644 index 0000000..343513a --- /dev/null +++ b/src/content/docs/es/customization/reports.md @@ -0,0 +1,51 @@ +--- +title: Reports +description: Instructions for creating custom reports and subreports in ArchivesSpace, including required structure, SQL usage, translations, optional customization methods, and integration with the reporting framework. +--- + +Adding a report is intended to be a fairly simple process. The requirements for creating a report are outlined below. + +## Adding a Report + +### Required + +- Create a class for your report that is a subclass of AbstractReport. +- Call register_report. If your report has any parameters, specify them here. +- Implement query_string + - This should be a raw SQL string + - To prevent SQL injection, use db.literal for any user input i.e. use `"select * from table where column = #{db.literal(value)}" ` instead of `"select * from table where column = '#{value}'"` +- Provide translations for column headers and the title of your report + - They should be in yml files under _language_.reports._report name_ + - The translation for title should be whatever you want the name of the report to be. + - If the translation you want is already in _language_.reports.translation_defaults (found in the static folder) you do not need to specify it. + - Translations specific to the individual report are given priority over translation defaults. + +### Optional + +- Implement your own initializer if your report has any parameters. +- Implement fix_row in order to clean up data and add subreports. + - Each result will be passed to fix_row as a hash + - ReportUtils offers various class methods to simplify cleaning up data. 
+  - You can also add subreports here with something like `row[:subreport_name] = SubreportClassName.new(self, row[:id]).get_content`, where row is the result hash that was passed to fix_row. See [Adding a Subreport](#adding-a-subreport) for more information on adding subreports.
+  - Sometimes you will want to delete something from the result that you needed in order to generate a subreport but do not want to show up in the final report (such as id). To do this, use `row.delete(:id)`.
+- Special implementation of query - The default implementation is simply `db.fetch(query_string)`, but implementing it yourself may give you more flexibility. In the end, it needs to return a result set.
+- There is a hash called info that controls what shows up in the header at the top of the report. Examples may include total record count, total extent, or any parameters that are provided by the user for your report. Add anything you want to show up in the report header to info. The repository name will be included automatically. Be sure to provide translations for the keys you add to info.
+- after_tasks is run after fix_row executes on all the results. Implement this if you have anything that needs to get done before the report is rendered.
+- Specify identifier_field if you want to add a heading to each individual record. For instance, identifier_field might be `:accession_number` for a report on accessions.
+- Implement page_break to return false if you do not want a page break after each record in the PDF of the report.
+- Implement special_translation if there is anything you want to translate in a special way (i.e. it can't be accomplished by the yml file).
+
+## Adding A Subreport
+
+### Required
+
+- Create a class for your subreport that is a subclass of AbstractSubreport.
+- Create an initializer that takes in the parent report/subreport as well as any parameters you need to run the subreport (usually this is just an id from the result in the parent report/subreport). Your initializer should call `super(parent_report)`.
+- Implement query_string. This works the same way as it does for reports.
+- Provide necessary translations.
+
+### Optional
+
+- Special implementation of query
+- fix_row works just like in reports
+  - Note that you can add nested subreports.
diff --git a/src/content/docs/es/customization/theming.md b/src/content/docs/es/customization/theming.md
new file mode 100644
index 0000000..9e15c0a
--- /dev/null
+++ b/src/content/docs/es/customization/theming.md
@@ -0,0 +1,141 @@
+---
+title: Theming
+description: A guide to customizing the look and feel of ArchivesSpace using plugins or full theme rebuilds, including instructions for changing logos, CSS, and layout elements in both the public and staff interfaces.
+---
+
+## Making small changes
+
+It's easiest to use a plugin for small changes to your site's theme. With a plugin,
+we can override default views, controllers, models, etc. without having to do a
+complete rebuild of the source code.
+
+Let's say we wanted to change the branding logo on the public
+interface. That can be easily changed in your `config.rb` file (be sure to remove
+the `#` at the beginning of any line that you want to change; any line that starts
+with a `#` is ignored):
+
+```ruby
+AppConfig[:pui_branding_img]
+```
+
+That setting is used by the file found in `public/app/views/shared/_header.html.erb` to display your PUI side logo. You don't need to change that file, only the setting in your `config.rb` file.
+
+You can store the image in `plugins/local/public/assets/images/logo.png`. You'll most likely need to create one or more of the directories.
+
+Your `AppConfig[:pui_branding_img]` setting should look something like this:
+
+```ruby
+AppConfig[:pui_branding_img] = '/assets/images/logo.png'
+```
+
+Alt text for the PUI branding image can and should also be supplied via:
+
+```ruby
+AppConfig[:pui_branding_img_alt_text] = 'My alt text'
+```
+
+If you want your image on the PUI to link out to another location, you will need to make a change to the file `public/app/views/shared/_header.html.erb`. The line that creates the logo just needs an `a href` added. You should also alter `AppConfig[:pui_branding_img_alt_text]` to make it clear that the image also functions as a link (e.g. `AppConfig[:pui_branding_img_alt_text] = 'Back to Example College Home'`). That will end up looking something like this:

+```erb
+<div class="col-sm-3 hidden-xs"><a href="https://example.com"><img class="logo" src="<%= asset_path(AppConfig[:pui_branding_img]) %>" alt="<%= AppConfig[:pui_branding_img_alt_text] %>" /></a></div>
+```
+
+The staff side logo will need a small plugin file and cannot be set in your `config.rb` file. This needs to be changed in the `plugins/local/frontend/views/site/_branding.html.erb` file. You'll most likely need to create one or more of the directories. Then create that `_branding.html.erb` file and paste in the following code:
+
+```erb
+<div class="container-fluid navbar-branding">
+  <%= image_tag "archivesspace/archivesspace.small.png", :class=>"img-responsive", :alt=>"My image alt text" %>
+</div>
+```
+
+Change the `"archivesspace/archivesspace.small.png"` to the path to your image, `/assets/images/logo.png`, and place your logo in the `plugins/local/frontend/assets/images/` directory. You'll most likely need to create one or more of the directories.
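Putting the staff-side pieces together, the finished partial might look something like the following sketch. The filename and alt text here are placeholders, and it uses a plain `<img>` tag with the `AppConfig[:frontend_proxy_prefix]` pattern shown in the plugin documentation, since tag helpers are not available for plugin assets:

```erb
<!-- plugins/local/frontend/views/site/_branding.html.erb -->
<div class="container-fluid navbar-branding">
  <img class="img-responsive" src="<%= AppConfig[:frontend_proxy_prefix] %>assets/images/logo.png" alt="My image alt text" />
</div>
```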
+
+**Note:** Since anything we add to the plugins directory will not be precompiled by
+the Rails asset pipeline, we cannot use some of the tag helpers
+(like `image_tag`), since those assume the asset is being managed by the
+asset pipeline.
+
+Restart the application and you should see your logo in the default view.
+
+## Adding CSS rules
+
+You can customize CSS through the plugin system too. If you don't want to create
+a whole new plugin, the easiest way is to modify the 'local' plugin that ships
+with ArchivesSpace (it's intended for this kind of site-specific change). As
+long as you've still got 'local' listed in your `AppConfig[:plugins]` list, your
+changes will get picked up.
+
+To do that, create a file called
+`archivesspace/plugins/local/frontend/views/layout_head.html.erb` for the staff
+side or `archivesspace/plugins/local/public/views/layout_head.html.erb` for the
+public side. Then you can add the line to include the CSS in the site:
+
+```erb
+<%= stylesheet_link_tag "#{@base_url}/assets/custom.css" %>
+```
+
+Then place your CSS in the file:
+
+    staff side:
+    archivesspace/plugins/local/frontend/assets/custom.css
+    or public side:
+    archivesspace/plugins/local/public/assets/custom.css
+
+and it will get loaded on each page.
+
+You may also want to make changes to the main index page, or the header and
+footer. Those overrides would go into the following places for the public side
+of your site:
+
+    archivesspace/plugins/local/public/views/welcome/show.html.erb
+    archivesspace/plugins/local/public/views/shared/_header.html.erb
+    archivesspace/plugins/local/public/views/shared/_footer.html.erb
+
+## Heavy re-theming
+
+If you're wanting to really trick out your site, you could do this in a plugin
+using the override methods shown above, although there are some big disadvantages
+to this. The first is that assets will not be compiled by the Rails asset
+pipeline. Another is that you won't be able to take advantage of the variables
+and mixins that Bootstrap and Less provide as a framework, which really help
+keep your assets well organized.
+
+A better way to do this is to pull down a copy of the ArchivesSpace code and
+build out a new theme. A good resource on how to do this is
+[this video](https://www.youtube.com/watch?v=Uny736mZVnk).
+This video covers the staff frontend UI, but the same steps can be applied to
+the public UI as well.
+
+Also become a little familiar with the
+[build system instructions](/development/dev).
+
+First, pull down a new copy of ArchivesSpace using git and be sure to check out
+a tag matching the version you're using or wanting to use.
+
+```shell
+$ git clone https://github.com/archivesspace/archivesspace.git
+$ cd archivesspace
+$ git checkout v2.5.2
+```
+
+You can start your application development servers by executing:
+
+```shell
+$ ./build/run bootstrap
+$ ./build/run backend:devserver
+$ ./build/run frontend:devserver
+$ ./build/run public:devserver
+```
+
+**Note:** You don't have to run all these commands all the time. The bootstrap
+command really only has to be run the first time you pull down the code --
+it will also take a while. You also don't have to start the frontend or public
+if you're not working on those interfaces. The backend does have to be started for
+either the public or frontend interface to work.
+
+Follow the instructions in the video to create a new theme. A good way is to copy the existing default theme to a new folder and start making your updates. Be sure to take advantage of the existing variables set in the Less files to make your assets nice and organized.
+
+Once you've updated your theme and have got it working, you can package your application. You can use `./scripts/build_release` to build a totally fresh AS distribution, but you don't need to do that if you've simply made some minor changes to the UI. Instead, use `./build/run public:war` to compile your assets and package a war file. You can then take this `public.war` file and replace your ASpace distribution war file.
+
+Be sure to update your theme setting in the `config.rb` file and restart ASpace.
diff --git a/src/content/docs/es/customization/xsl.md b/src/content/docs/es/customization/xsl.md
new file mode 100644
index 0000000..5ed0605
--- /dev/null
+++ b/src/content/docs/es/customization/xsl.md
@@ -0,0 +1,17 @@
+---
+title: XSL stylesheets
+description: Provides information about the XSL stylesheets for transforming ArchivesSpace EAC-CPF and EAD exports into HTML or PDF, using Saxon for processing.
+---
+
+ArchivesSpace includes three stylesheets for you to transform exported data
+into human-friendly formats. The stylesheets included are as follows:
+
+- `as-eac-cpf-html.xsl`: Generates HTML from EAC-CPF records
+- `as-ead-html.xsl`: Generates HTML from EAD records
+- `as-ead-pdf.xsl`: Generates XSL-FO output from EAD for transformation into PDF
+
+These stylesheets have been tested and are known to work with
+[Saxon](http://saxonica.com/download/download_page.xml) 9.5.1.1 and higher.
+
+The `as-helper-functions.xsl` stylesheet is required by the other three
+stylesheets listed above.
diff --git a/src/content/docs/es/development/dev.md b/src/content/docs/es/development/dev.md
new file mode 100644
index 0000000..b33f69d
--- /dev/null
+++ b/src/content/docs/es/development/dev.md
@@ -0,0 +1,495 @@
+---
+title: Development environment
+description: Guidance for setting up a development environment for ArchivesSpace, including system requirements, supported development platforms, a quickstart guide, and step-by-step instructions.
+---
+
+System requirements:
+
+- Java 17
+- [Docker](https://www.docker.com/) & [Docker Compose](https://docs.docker.com/compose/) are optional but make running MySQL and Solr more convenient
+- [Supervisord](http://supervisord.org/) is optional but makes running the development servers more convenient
+- [mysql-client](https://www.bytebase.com/reference/mysql/how-to/how-to-install-mysql-client-on-mac-ubuntu-centos-windows/) is required in order to load demo data or other SQL dumps into the database
+
+Currently supported platforms for development:
+
+- Linux (although generally only Ubuntu is actually used / tested)
+- macOS on Intel (x86_64)
+- macOS on Apple silicon (ARM64) _since v4.0.0_
+
+:::note[Apple silicon and ArchivesSpace before v4.0.0]
+To install versions of ArchivesSpace prior to v4.0.0 with macOS on Apple silicon, see [https://teaspoon-consulting.com/articles/archivesspace-on-the-m1.html](https://teaspoon-consulting.com/articles/archivesspace-on-the-m1.html).
+:::
+
+:::danger[Windows development not supported]
+Windows is not supported because of issues building gems with C extensions (such as sassc).
+:::
+
+When installing Java, [OpenJDK](https://openjdk.org/) is strongly recommended. Other vendors may work, but OpenJDK is most extensively used and tested. It is highly recommended that you use a version manager such as [mise](https://mise.jdx.dev/lang/java.html) to install Java (OpenJDK). This has proven to be a reliable way of resolving cross-platform issues that have occurred via other means of installing Java.
+ +Installing OpenJDK with mise will look something like: + +```bash +mise use -g java@openjdk-17 +``` + +On Linux/Ubuntu it is generally fine to install from system packages: + +```bash +sudo apt install openjdk-$VERSION-jdk-headless +# example: install 17 +sudo apt install openjdk-17-jdk-headless +# update-java-alternatives can be used to switch between versions +sudo update-java-alternatives --list +sudo update-java-alternatives --set $version +``` + +For [Homebrew](https://brew.sh/) users (macOS, Linux), the OpenJDK distribution from Azul has been reported to work: + +```bash +# install Java v17 for example +brew install --cask zulu@17 +``` + +If using Docker & Docker Compose install them following the official documentation: + +- [https://docs.docker.com/get-docker/](https://docs.docker.com/get-docker/) +- [https://docs.docker.com/compose/install/](https://docs.docker.com/compose/install/) + +_Do not use system packages or any other unofficial source as these have been found to be inconsistent with standard Docker._ + +The recommended way of developing ArchivesSpace is to fork the repository and clone it locally. 
+ +_Note: all commands in the following instructions assume you are in the root directory of your local fork +unless otherwise specified._ + +**Quickstart** + +This is an abridged reference for getting started with a limited explanation of the steps: + +```bash +# Build images (required one time only for most use cases) +docker-compose -f docker-compose-dev.yml build +# Run MySQL and Solr in the background +docker-compose -f docker-compose-dev.yml up --detach +# Download the MySQL connector +cd ./common/lib && wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.30/mysql-connector-java-8.0.30.jar && cd - +# Download all application dependencies +./build/run bootstrap +# OPTIONAL: load dev database +gzip -dc ./build/mysql_db_fixtures/demo.sql.gz | mysql --host=127.0.0.1 --port=3306 -u root -p123456 archivesspace +# Setup the development database +./build/run db:migrate +# Clear out any existing Solr state (only needed after a database setup / restore after previous development) +./build/run solr:reset +# Run the development servers +supervisord -c supervisord/archivesspace.conf +# OPTIONAL: Run a backend (api) test (for checking setup is correct) +./build/run backend:test -Dexample="User model" +``` + +## Step by Step explanation + +### Run MySQL and Solr + +ArchivesSpace development requires MySQL and Solr to be running. The easiest and +recommended way to run them is using the Docker Compose configuration provided by ArchivesSpace. + +Start by building the images. This creates a custom Solr image that includes ArchivesSpace's configuration: + +```bash +docker-compose -f docker-compose-dev.yml build +``` + +_Note: you only need to run the above command once. 
You would only need to rerun this command if a)
+you delete the image and therefore need to recreate it, or b) you make a change to ArchivesSpace's Solr
+configuration and therefore need to rebuild the image to include the updated configuration._
+
+Run MySQL and Solr in the background:
+
+```bash
+docker-compose -f docker-compose-dev.yml up --detach
+```
+
+By using Docker Compose to run MySQL and Solr you are guaranteed to have the correct connection settings
+and don't otherwise need to define connection settings for MySQL or Solr.
+
+Verify that MySQL & Solr are running: `docker ps`. It should list the running containers:
+
+```txt
+CONTAINER ID   IMAGE                       COMMAND                  CREATED       STATUS       PORTS                               NAMES
+ec76bd09d73b   mysql:8.0                   "docker-entrypoint.s…"   8 hours ago   Up 8 hours   33060/tcp, 0.0.0.0:3307->3306/tcp   as_test_db
+30574171530f   archivesspace/solr:latest   "docker-entrypoint.s…"   8 hours ago   Up 8 hours   0.0.0.0:8984->8983/tcp              as_test_solr
+d84a6a183bb0   archivesspace/solr:latest   "docker-entrypoint.s…"   8 hours ago   Up 8 hours   0.0.0.0:8983->8983/tcp              as_dev_solr
+7df930293875   mysql:8.0                   "docker-entrypoint.s…"   8 hours ago   Up 8 hours   0.0.0.0:3306->3306/tcp, 33060/tcp   as_dev_db
+```
+
+To check the servers are online:
+
+- MySQL: `mysql -h 127.0.0.1 -u as -pas123 archivesspace`
+- Solr: `curl http://localhost:8983/solr/admin/cores`
+
+To stop and/or remove the servers:
+
+```bash
+docker-compose -f docker-compose-dev.yml stop # shuts down the servers (data will be preserved)
+docker-compose -f docker-compose-dev.yml rm   # deletes the containers (all data will be removed)
+```
+
+**Advanced: running MySQL and Solr outside of Docker**
+
+You are not required to use Docker for MySQL and Solr. If you run them another way, the default
+requirements are:
+
+- dev MySQL, localhost:3306, create db: archivesspace, username: as, password: as123
+- test MySQL, localhost:3307, create db: archivesspace, username: as, password: as123
+- dev Solr, localhost:8983, create archivesspace core using ArchivesSpace configuration
+- test Solr, localhost:8984, create archivesspace core using ArchivesSpace configuration
+
+The defaults can be changed using [environment variables](https://github.com/archivesspace/archivesspace/blob/master/build/build.xml#L43-L46) located in the build file.
+
+### Download the MySQL connector
+
+For licensing reasons the MySQL connector must be downloaded separately:
+
+```bash
+cd ./common/lib
+wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.30/mysql-connector-java-8.0.30.jar
+cd -
+```
+
+### Run bootstrap
+
+The bootstrap task:
+
+    ./build/run bootstrap
+
+Will bootstrap your development environment by downloading all
+dependencies--JRuby, Gems, etc. This one command creates a fully
+self-contained development environment where everything is downloaded
+within the ArchivesSpace project `build` directory.
+
+_It is not necessary and generally incorrect to manually install JRuby
+& bundler etc. for ArchivesSpace (whether with a version manager or
+otherwise)._
+
+_The self-contained ArchivesSpace development environment typically does
+not interact with other J/Ruby environments you may have on your system
+(such as those managed by rbenv or similar)._
+
+This is the starting point for all ArchivesSpace development. You may need
+to re-run this command after fetching updates, or when making changes to
+Gemfiles or other dependencies such as those in the `./build/build.xml` file.
+
+**Errors running bootstrap**
+
+```txt
+    [java] INFO: jetty-9.4.44.v20210927; built: 2021-09-27T23:02:44.612Z; git: 8da83308eeca865e495e53ef315a249d63ba9332; jvm 11+28
+    [java] Exiting
+    [java] LoadError: no such file to load -- rails/commands
+    [java]   require at org/jruby/RubyKernel.java:974
+    [java]    <main> at script/rails:8
+```
+
+This error can appear when starting any of the devservers:
+
+    ./build/run backend:devserver
+    ./build/run frontend:devserver
+    ./build/run public:devserver
+    ./build/run indexer
+
+There have been various forms of the same `LoadError`. It's a transient error
+that is resolved by rerunning bootstrap.
+
+```txt
+    [java] org.jruby.Main -I uri:classloader://META-INF/jruby.home/lib/ruby/stdlib -r
+    [java] ./siteconf20220407-5224-13f6qi7.rb extconf.rb
+    [java] sh: /Library/Internet: No such file or directory
+    [java] sh: line 0: exec: /Library/Internet: cannot execute: No such file or directory
+    [java]
+    [java] extconf failed, exit code 126
+```
+
+This has been seen on Mac platforms, resulting from the installation method
+for Java. Installing the OpenJDK via Jabba has been effective in resolving
+this error.
+
+**Advanced: bootstrap & the build directory**
+
+Running bootstrap will download jars to the build directory, including:
+
+- jetty-runner
+- jruby
+- jruby-rack
+
+Gems will be downloaded to: `./build/gems/jruby/$version/gems/`.
+
+### Setup the development database
+
+The migrate task:
+
+```bash
+./build/run db:migrate
+```
+
+Will set up the development database, creating all of the tables etc.
+required by the application.
+
+There is a task for resetting the database:
+
+```bash
+./build/run db:nuke
+```
+
+Which will first delete and then migrate the database.
+
+### Loading data fixtures into dev database
+
+When loading a database into the development MySQL instance, always ensure that ArchivesSpace
+is **not** running. Stop ArchivesSpace if it is running. Run `./build/run solr:reset` to
+clear indexer state (a more thorough explanation of this step is described below).
+ +If you are loading a database and MySQL has already been used for development you'll want to +drop and create an empty database first. + +```bash +mysql -h 127.0.0.1 -u as -pas123 -e "DROP DATABASE archivesspace" +mysql -h 127.0.0.1 -u as -pas123 -e "CREATE DATABASE IF NOT EXISTS archivesspace DEFAULT CHARACTER SET utf8mb4" +``` + +_Note: you can skip the above step if MySQL was just started for the first time or any time you +have an empty ArchivesSpace (one where `db:migrate` has not been run)._ + +Assuming you have MySQL running and an empty `archivesspace` database available you can proceed +to restore: + +```bash +gzip -dc ./build/mysql_db_fixtures/blank.sql.gz | mysql --host=127.0.0.1 --port=3306 -u root -p123456 archivesspace +./build/run db:migrate +``` + +_Note: The above instructions should work out-of-the-box. If you want to use your own database +and / or have configured MySQL differently then adjust the commands as needed._ + +After the restore `./build/run db:migrate` is run to catch any migration updates. You can now +proceed to run the application dev servers, as described below, with data already +populated in ArchivesSpace. + +### Clear out existing Solr state + +The Solr reset task: + +```bash +./build/run solr:reset +``` + +Will wipe out any existing Solr state. This is not required when setting +up for the first time, but is often required after a database reset (such as +after running the `./build/run db:nuke` task). 
+
+_More specifically what this does is submit a delete-all request to Solr and empty
+out the contents of the `./build/dev/indexer*_state` directories, which is described
+below._
+
+### Run the development servers
+
+Use [Supervisord](http://supervisord.org/) for a simpler way of running the development servers with output
+for all servers sent to a single terminal window:
+
+```bash
+# run all of the services
+supervisord -c supervisord/archivesspace.conf
+
+# run in api mode (backend + indexer only)
+supervisord -c supervisord/api.conf
+
+# run just the backend (useful for trying out endpoints that don't require Solr)
+supervisord -c supervisord/backend.conf
+```
+
+ArchivesSpace is started with:
+
+- the staff interface on [http://localhost:3000/](http://localhost:3000/)
+- the PUI on [http://localhost:3001/](http://localhost:3001/)
+- the API on [http://localhost:4567/](http://localhost:4567/)
+
+To stop supervisord: `Ctrl-c`.
+
+#### Advanced: running the development servers directly
+
+Supervisord is not required, nor ideal for every situation. You can run the development
+servers directly via build tasks:
+
+```bash
+./build/run backend:devserver  # This is the REST API
+./build/run frontend:devserver # This is the staff user interface
+./build/run public:devserver   # This is the public user interface
+./build/run indexer            # This is the indexer (converts ASpace records to Solr Docs and ships to Solr)
+```
+
+These should be run in different terminal sessions; they do not need to be run
+in a specific order, nor are all of them required.
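Whichever way you start the servers, a quick way to confirm that all three services are listening is to probe their default ports. This is a sketch, not part of the build tooling; the ports are assumed from the defaults listed above.

```shell
# Probe each ArchivesSpace dev service on its default port and report status.
# Adjust the port list if you have changed the default bindings.
for port in 3000 3001 4567; do
  if curl -sf -o /dev/null --max-time 2 "http://localhost:${port}/"; then
    echo "port ${port}: up"
  else
    echo "port ${port}: down"
  fi
done
```

Each port is reported independently, so a partial setup (for example, backend only) is easy to spot.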
+
+_An example use case for running a server directly is to use the pry debugger._
+
+#### Advanced: debugging with pry
+
+To debug with pry you cannot use supervisord to run the application devserver;
+however, you can mix and match:
+
+```bash
+# run the backend and indexer with supervisord
+supervisord -c supervisord/api.conf
+
+# in a separate terminal run the frontend directly
+./build/run frontend:devserver
+```
+
+Add `require 'pry-debugger-jruby'; binding.pry` to set breakpoints in the code. This can also be used in views:
+`<% require 'pry-debugger-jruby'; binding.pry %>`. Using pry you can easily inspect the `request`, `params`, and
+in-scope instance variables that are available. Typical debugger commands are available:
+
+- `step`: Step execution into the next line or method. Takes an optional numeric argument to step multiple times.
+- `next`: Step over to the next line within the same frame. Takes an optional numeric argument to step multiple times. Differs from step in that it always stays within the same frame (e.g. does not go into other method calls).
+- `finish`: Execute until current stack frame returns.
+- `continue`: Continue program execution and end the Pry session.
+- `puts caller.join("\n")`: Get the current stacktrace.
+
+See also [pry-debugger-jruby docs](https://gitlab.com/ivoanjo/pry-debugger-jruby).
+
+#### Advanced: development servers and the build directory
+
+Running the development servers will create directories in `./build/dev`:
+
+- indexer_pui_state: latest timestamps for PUI indexer activity
+- indexer_state: latest timestamps for (SUI) indexer activity
+- shared: background job files
+
+_Note: the folders will be created as they are needed, so they may not all be present
+at all times._
+
+#### Accessing development servers from other devices on the local network
+
+You can access the ArchivesSpace development servers from other devices on your local network.
This is especially useful for testing on mobile operating systems.
+
+##### Prerequisites
+
+1. Your development machine and the other device must be on the same WiFi network
+2. The ArchivesSpace development servers must be running on the development machine
+
+##### Steps
+
+1. Get your development machine's local IP address
+
+   On macOS:
+
+   ```bash
+   ipconfig getifaddr en0
+   ```
+
+   On Linux:
+
+   ```bash
+   hostname -I | awk '{print $1}'
+   ```
+
+   This returns something like `134.192.0.47`.
+
+2. Start the [development servers](#run-the-development-servers)
+
+   The development servers bind to `0.0.0.0` by default, making them accessible from other devices on the network (see the [frontend binding example](https://github.com/archivesspace/archivesspace/blob/f77dec627cd1feac77e4b67f9242d617efe80e94/build/build.xml#L899)).
+
+3. Access from another device
+
+   On the other device, open a web browser and navigate to your development machine's IP address with the appropriate port, e.g. `http://<your-local-ip>:<port>/`.
+
+   So for IP address `134.192.0.47`:
+   - Staff interface: `http://134.192.0.47:3000/`
+   - Public interface: `http://134.192.0.47:3001/`
+   - API: `http://134.192.0.47:4567/`
+
+## Running the tests
+
+### Backend tests
+
+_By default the tests are configured to run using a separate MySQL & Solr from the
+development servers. This means that the development and test environments will not
+interfere with each other._
+
+```bash
+# run the backend / api tests
+./build/run backend:test
+```
+
+You can also run a single spec file with:
+
+```bash
+./build/run backend:test -Dspec="myfile_spec.rb"
+```
+
+Or a single example with:
+
+```bash
+./build/run backend:test -Dexample="does something important"
+```
+
+Or by file line with:
+
+```bash
+./build/run backend:test -Dspec="myfile_spec.rb:123"
+```
+
+There are specific instructions and requirements for the [UI tests](/development/ui_test) to work.
+
+**Advanced: tests and the build directory**
+
+Running the tests may create directories in `./build/test`. These will be
+the same as for the development servers as described above.
+
+## Coverage reports
+
+You can run the coverage reports using:
+
+    ./build/run coverage
+
+This runs all of the above tests in coverage mode and, when the run
+finishes, produces a set of HTML reports within the `coverage`
+directory in your ArchivesSpace project directory.
+
+## Linting and formatting with Rubocop
+
+If you are editing or adding source files that you intend to contribute via a pull request,
+you should make sure your changes conform to the layout and style rules by running:
+
+    ./build/run rubocop
+
+Most errors can be auto-corrected by running:
+
+    ./build/run rubocop -Dcorrect=true
+
+## Submitting a Pull Request
+
+When you have code ready to be reviewed, open a pull request to ask for it to be
+merged into the codebase.
+
+To help make the review go smoothly, here are some general guidelines:
+
+- **Your pull request should address a single issue.**
+  It's better to split large or complicated PRs into discrete steps if possible. This
+  makes review more manageable and reduces the risk of conflicts with other changes.
+- **Give your pull request a brief title, referencing any JIRA or GitHub issues resolved
+  by the pull request.**
+  Including JIRA numbers (e.g. 'ANW-123') explicitly in your pull request title ensures the
+  PR will be linked to the original issue in JIRA. Similarly, referencing GitHub issue numbers
+  (e.g. 'Fixes #123') will automatically close that issue when the PR is merged.
+- **Fill out as much of the Pull Request template as is possible/relevant.**
+  This makes it easier to understand the full context of your PR, including any discussions or supporting documentation that went into developing the functionality or resolving the bug.
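As a concrete illustration, a title following the guidelines above might look like this (the ticket numbers are purely invented):

```md
ANW-1234: Fix subject heading sort order in the staff interface (Fixes #5678)
```

A title in this shape links the PR to JIRA and closes the matching GitHub issue on merge.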
+
+## Building a distribution
+
+See: [Building an ArchivesSpace Release](/development/release) for information on building a distribution.
+
+## Generating API documentation
+
+See: [Building an ArchivesSpace Release](/development/release) for information on building the documentation.
diff --git a/src/content/docs/es/development/docker.md b/src/content/docs/es/development/docker.md
new file mode 100644
index 0000000..8168231
--- /dev/null
+++ b/src/content/docs/es/development/docker.md
@@ -0,0 +1,42 @@
+---
+title: Docker
+description: A guide to using the Docker configuration with ArchivesSpace.
+---
+
+The [Docker](https://www.docker.com/) configuration is used to create [automated builds](https://hub.docker.com/r/archivesspace/archivesspace/) on Docker Hub, which are deployed to [the test server](http://test.archivesspace.org) when the build completes.
+
+## Custom builds
+
+Run ArchivesSpace with MySQL, external Solr and a web proxy. Switch to the
+branch you want to build:
+
+```bash
+# if you already have running containers and want to clear them out
+docker-compose stop
+docker-compose rm
+
+# build the local image
+docker-compose build # needed whenever the branch is changed and ready to test
+docker-compose up
+
+# running specific containers
+docker-compose up -d db solr # in background
+docker-compose up app web    # in foreground
+
+# to access a running container
+docker exec -it archivesspace_app_1 bash
+```
+
+## Sharing an image
+
+The easiest way to share the built image is to create an account on [Docker Hub](https://hub.docker.com/). Next, retag the image and push it to the hub account:
+
+```bash
+DOCKER_ID_USER=example
+TAG=awesome-updates
+docker tag archivesspace_app:latest $DOCKER_ID_USER/archivesspace:$TAG
+docker push $DOCKER_ID_USER/archivesspace:$TAG
+```
+
+To download the image: `docker pull example/archivesspace:awesome-updates`.
+
diff --git a/src/content/docs/es/development/e2e_tests.md b/src/content/docs/es/development/e2e_tests.md
new file mode 100644
index 0000000..2a78b10
--- /dev/null
+++ b/src/content/docs/es/development/e2e_tests.md
@@ -0,0 +1,152 @@
+---
+title: ArchivesSpace End-to-End Test Suite
+description: Instructions on running the end-to-end test suite.
+---
+
+For more context on the [End-to-End test suite](https://github.com/archivesspace/archivesspace/tree/master/e2e-tests) and how to contribute tests, see our [wiki page](https://archivesspace.atlassian.net/wiki/spaces/ADC/pages/4606590990/How+to+contribute+End+to+End+test+scenarios).
+
+## Recommended setup
+
+### Using a version manager
+
+The required Ruby version for the e2e test application is documented in [`./.ruby-version`](./.ruby-version).
+
+It is strongly recommended to use a version manager (such as [mise](https://mise.jdx.dev/)) to be able to switch to any version that a given project requires.
+
+#### mise
+
+We recommend using [mise](https://mise.jdx.dev/) to manage Ruby (and other runtimes). Installation instructions are available at [Getting started](https://mise.jdx.dev/getting-started.html).
+
+#### Alternatives to `mise`
+
+If you wish to use a different Ruby manager or installation method, see [Ruby's installation documentation](https://www.ruby-lang.org/en/documentation/installation/).
+
+### Installation
+
+From the ArchivesSpace root directory, navigate to the e2e test application, then install Ruby, Bundler, and the application dependencies:
+
+```sh
+# 1. Navigate to e2e-tests directory
+cd e2e-tests
+
+# 2. Install Ruby at the version specified in ./.ruby-version
+mise install
+
+# 3. Install the Bundler dependency manager
+gem install bundler
+
+# 4.
Install project dependencies
+bundle install
+```
+
+## Running the tests locally
+
+### Just working on the e2e tests with Docker
+
+If you are just working on e2e tests and not touching the ArchivesSpace application, you can run e2e tests locally against the latest ArchivesSpace `master` branch build using Docker.
+
+#### Install Docker Desktop
+
+[Docker Desktop](https://www.docker.com/get-started/) is a one-click-install application for Linux, Mac, and Windows. It provides both terminal and GUI access to Docker. Download and install the appropriate version for your operating system from the link above. You can also use alternative software for running Docker containers, such as [OrbStack](https://orbstack.dev/) for macOS.
+
+#### Run the latest ArchivesSpace Docker image
+
+```sh
+# Get the latest ArchivesSpace `master` branch build
+docker compose pull
+
+# Start ArchivesSpace servers
+docker compose up
+```
+
+Verify the servers are running by opening [http://localhost:8080](http://localhost:8080) in a browser.
+
+### Working with an ArchivesSpace development environment
+
+You can run the e2e test suite against your local ArchivesSpace development environment, but be aware that your database, Solr index, and any configuration changes will need to be reset.
+
+#### Reset your database and Solr index
+
+Make sure your ArchivesSpace instance has a [blank database](https://docs.archivesspace.org/development/dev/#loading-data-fixtures-into-dev-database) and [blank Solr index](https://docs.archivesspace.org/development/dev/#clear-out-existing-solr-state).
+
+#### Restore default configuration options (except for `AppConfig[:db_url]`)
+
+Make sure you revert any local changes to the default configuration options (in `../common/config/config.rb`) by commenting them out or deleting them, except for `AppConfig[:db_url]` (which is required for using the MySQL database).
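For illustration, a stripped-down `config.rb` for an e2e run might contain only the database URL. The URL below is an assumption based on the default dev MySQL settings described in the dev environment docs; adjust it to match your own setup.

```ruby
# ../common/config/config.rb — minimal sketch for e2e test runs.
# Every other AppConfig option is left at its default value.
AppConfig[:db_url] = "jdbc:mysql://127.0.0.1:3306/archivesspace?user=as&password=as123&useUnicode=true&characterEncoding=UTF-8"
```

Keeping only this one override makes it easy to see at a glance that the instance is otherwise running with defaults.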
+ +#### Run the frontend dev server + +Start the `frontend:devserver` as described [here](https://docs.archivesspace.org/development/dev/#run-the-development-servers). Verify it is running by opening [http://localhost:3000/](http://localhost:3000/) in your browser. + +#### Run the public dev server + +Start the `public:devserver` as described [here](https://docs.archivesspace.org/development/dev/#run-the-development-servers). Verify it is running by opening [http://localhost:3001/](http://localhost:3001/) in your browser. + +#### Set the `STAFF_URL` environment variable + +Set your `STAFF_URL` environment variable to point the e2e tests at the local development server: + +```sh +export STAFF_URL='http://localhost:3000' +``` + +#### Set the `PUBLIC_URL` environment variable + +Set your `PUBLIC_URL` environment variable to point the e2e tests at the local public interface: + +```sh +export PUBLIC_URL='http://localhost:3001' +``` + +## Running tests + +After setting the appropriate `STAFF_URL` and `PUBLIC_URL` environment variables as described above, run the desired test(s) according to the following commands. + +### All test files at once + +```sh +bundle exec cucumber staff_features/ +``` + +### All scenarios in a specific file + +```sh +bundle exec cucumber staff_features/assessments/assessment_create.feature +``` + +### A specific scenario in a specific file + +```sh +bundle exec cucumber staff_features/assessments/assessment_create.feature --name 'Assessment is created' +``` + +## Debugging + +Add a `byebug` statement in any `.rb` file to set a breakpoint and start a debugging session in the console while running. See more [here](https://github.com/deivid-rodriguez/byebug). Don't forget to remove any `byebug` statements before a `git push`... 
+ +If you need to see the browser while running the test scenario and debugging, add a `HEADLESS=''` argument, as in: + +```sh +bundle exec cucumber HEADLESS='' staff_features/ +``` + +## Linters + +This test suite uses two linters, [`cuke_linter`](https://github.com/enkessler/cuke_linter) and [`rubocop`](https://rubocop.org/), to maintain code quality. + +```sh +# Lints Cucumber .feature files +bundle exec cuke_linter + +# Lints Ruby .rb files +bundle exec rubocop +``` + +## Editor integration (optional) + +ArchivesSpace provides optional VS Code workspace tasks that can run the end-to-end test suite without manually setting environment variables or changing directories. + +These tasks execute the same cucumber commands described above and are simply a convenience wrapper around the documented command-line workflow. + +Setup instructions are documented in the **VS Code guide** [here](https://docs.archivesspace.org/development/vscode/). + +Contributors not using VS Code can ignore this section and run the tests directly from the command line. diff --git a/src/content/docs/es/development/ead-exporter.md b/src/content/docs/es/development/ead-exporter.md new file mode 100644 index 0000000..55cc9cb --- /dev/null +++ b/src/content/docs/es/development/ead-exporter.md @@ -0,0 +1,31 @@ +--- +title: Repository EAD Exporter +description: A guide to export all published resources' EAD within a specified repository into a single zip archive. +--- + +Exports all published resource record EAD XML files associated with a single +repository into a zip archive. This zip file will be saved in the ArchivesSpace +data directory (as defined in `config.rb`) and include the repository id in the +filename. 
+
+## Usage
+
+```sh
+./scripts/ead_export.sh user password repository_id
+```
+
+A best practice would be to put the password in a hidden file such as:
+
+```sh
+touch ~/.aspace_password
+chmod 0600 ~/.aspace_password
+vi ~/.aspace_password # enter your password
+```
+
+Then call the script like:
+
+```sh
+./scripts/ead_export.sh user $(cat /home/user/.aspace_password) repository_id
+```
+
+This way you avoid directly exposing it on the command line, in crontab, etc.
diff --git a/src/content/docs/es/development/index.md b/src/content/docs/es/development/index.md
new file mode 100644
index 0000000..e0fdd9d
--- /dev/null
+++ b/src/content/docs/es/development/index.md
@@ -0,0 +1,13 @@
+---
+title: Development
+description: The index to the development section of the ArchivesSpace technical documentation.
+---
+
+- [Running a development version of ArchivesSpace](./dev.html)
+- [Building an ArchivesSpace release](./release.html)
+- [Docker](./docker.html)
+- [DB versions listed by release](./release_schema_versions.html)
+- [User Interface Test Suite](./ui_test.html)
+- [Upgrading Rack for ArchivesSpace](./jruby-rack-build.html)
+- [ArchivesSpace Releases](./releases.html)
+- [Using the VS Code editor for local development](./vscode.html)
diff --git a/src/content/docs/es/development/jruby-rack-build.md b/src/content/docs/es/development/jruby-rack-build.md
new file mode 100644
index 0000000..9db3b5e
--- /dev/null
+++ b/src/content/docs/es/development/jruby-rack-build.md
@@ -0,0 +1,96 @@
+---
+title: Upgrading Rack
+description: A guide to upgrading Rack.
+---
+
+- Install local JRuby (match aspace version, currently: 9.2.12.0) and switch to it.
+- Install Maven.
+- Download jruby-rack.
+ +```shell +git checkout 1.1-stable +# install bundler version to match 1.1-stable Gemfile.lock +gem install bundler --version=1.14.6 +``` + +Should result in: + +``` +Fetching bundler-1.14.6.gem +Successfully installed bundler-1.14.6 +Parsing documentation for bundler-1.14.6 +Installing ri documentation for bundler-1.14.6 +Done installing documentation for bundler after 5 seconds +1 gem installed +``` + +Set environment to target rack version (the version being upgraded to): + +```shell +export RACK_VERSION=2.2.3 +bundle +``` + +Should result in: + +``` +Fetching gem metadata from https://rubygems.org/............. +Fetching version metadata from https://rubygems.org/.. +Resolving dependencies... +Installing rake 10.4.2 +Using bundler 1.14.6 +Using diff-lcs 1.2.5 +Installing jruby-openssl 0.9.21 (java) +Using rack 2.2.3 (was 1.6.8) +Using rspec-core 2.14.8 +Using rspec-mocks 2.14.6 +Using appraisal 0.5.2 +Using rspec-expectations 2.14.5 +Using rspec 2.14.1 +Bundle complete! 5 Gemfile dependencies, 10 gems now installed. +Use `bundle show [gemname]` to see where a bundled gem is installed. +``` + +This will have bumped the Rack version in Gemfile.lock: + +```diff +diff --git a/Gemfile.lock b/Gemfile.lock +index 493c667..f016925 100644 +--- a/Gemfile.lock ++++ b/Gemfile.lock +@@ -6,7 +6,7 @@ GEM + rake + diff-lcs (1.2.5) + jruby-openssl (0.9.21-java) +- rack (1.6.8) ++ rack (2.2.3) + rake (10.4.2) + rspec (2.14.1) + rspec-core (~> 2.14.0) +@@ -23,7 +23,7 @@ PLATFORMS + DEPENDENCIES + appraisal + jruby-openssl (~> 0.9.20) +- rack (~> 1.6.8) ++ rack (= 2.2.3) + rake (~> 10.4.2) + rspec (~> 2.14.1) +``` + +Build the jruby-rack jar: + +```bash +bundle exec jruby -S rake clean gem SKIP_SPECS=true +``` + +This creates `target/jruby-rack-1.1.21.jar` with Rack 2.2.3. 
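The jar that gets published is named after both the jruby-rack version and the Rack version it bundles. As a small illustration of the naming convention (version values taken from the steps above), the object name can be composed like so:

```shell
# Compose the published jar name from the jruby-rack and Rack versions.
JRUBY_RACK_VERSION=1.1.21
RACK_VERSION=2.2.3
echo "jruby-rack-${JRUBY_RACK_VERSION}_rack-${RACK_VERSION}.jar"
# → jruby-rack-1.1.21_rack-2.2.3.jar
```

Encoding both versions in the filename keeps multiple builds distinguishable in the shared bucket.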
+
+Upload the jar to the public S3 bucket, specifying the Rack version:
+
+```bash
+aws s3 cp target/jruby-rack-1.1.21.jar \
+  s3://as-public-shared-files/jruby-rack-1.1.21_rack-2.2.3.jar \
+  --profile archivesspace
+```
+
+Finally, update `rack_version` in the aspace `build.xml` file.
diff --git a/src/content/docs/es/development/release.md b/src/content/docs/es/development/release.md
new file mode 100644
index 0000000..b157437
--- /dev/null
+++ b/src/content/docs/es/development/release.md
@@ -0,0 +1,263 @@
+---
+title: Building a release
+description: How to build an ArchivesSpace release.
+---
+
+- [Pre-release steps](#pre-release-steps)
+- [Build the docs](#build-and-publish-the-api-and-yard-docs)
+- [Build the release](#building-a-release-yourself)
+- [Post the release with release notes](#create-the-release-with-notes)
+- [Post-release updates](#post-release-updates)
+
+## Clone the git repository
+
+When building a release it is important to start from a clean repository. The
+safest way of ensuring this is to clone the repo:
+
+```shell
+git clone https://github.com/archivesspace/archivesspace.git
+```
+
+## Check out the release branch and create a release tag
+
+If you are building a major or minor version (see [https://semver.org](https://semver.org)),
+start by creating a branch for the release and all future patch releases:
+
+```shell
+git checkout -b release-v1.0.x
+git tag v1.0.0
+```
+
+If you are building a patch version, just check out the existing branch and see below:
+
+```shell
+git checkout release-v1.0.x
+```
+
+Patch versions typically arise because a regression or critical bug has arisen since
+the last major or minor release. We try to ensure that the "hotfix" is merged into both
+master and the release branch without the need to cherry-pick commits from one branch to
+the other. The reason is that cherry-picking creates a new commit (with a new commit id)
+that contains identical changes, which is not optimal for the repository history.
+
+It is therefore preferable to start from the release branch when creating a "hotfix"
+that needs to be merged into both the release branch and master. The Pull Request should
+then be based on the release branch. After that Pull Request has been through Code review,
+QA and merged, a second Pull Request should be created to merge the updated release branch
+to master.
+
+Consider the following scenario. The current production release is v1.0.0 and a critical
+bug has been discovered. In the time since v1.0.0 was released, new features have been
+added to the master branch, intended for release in v1.1.0:
+
+```shell
+git checkout -b oh-no-some-migration-corrupts-some-data origin/release-v1.0.x
+(fixes problem)
+git commit -m "fix bad migration and add a migration to repair corrupted data"
+gh pr create -B release-v1.0.x --web
+(PR is reviewed and merged to the release branch)
+git checkout release-v1.0.x
+git pull
+git tag v1.0.1
+gh pr create -B master --web
+(PR is reviewed and merged to the master branch)
+```
+
+## Pre-release steps
+
+### Run the ArchivesSpace rake tasks to check for issues
+
+Before proceeding further, it’s a good idea to check that there aren’t missing
+translations or multiple gem versions.
+
+1. Bootstrap your current development environment on the latest master branch
+   by downloading all dependencies--JRuby, Gems, Solr, etc.
+
+   ```shell
+   build/run bootstrap
+   ```
+
+2. Run the following checks (recommended):
+
+   ```shell
+   build/run rake -Dtask=check:multiple_gem_versions
+   ```
+
+3. If multiple gem versions are reported, that should be addressed prior to moving on.
+
+## Build and publish the API and Yard Docs
+
+API docs are built using the submodule in `docs/slate` and Docker.
+YARD docs are built using the YARD gem. At this time, they cover a small
+percentage of the code and are not especially useful.
+
+### Build the API docs
+
+1.
API documentation depends on the [archivesspace/slate](https://github.com/archivesspace/slate) submodule
+   and on Docker. Slate cannot run on JRuby.
+
+   ```shell
+   git submodule init
+   git submodule update
+   ```
+
+2. Run the `doc:api` task to generate Slate API and Yard documentation. (Note: the
+   API generation requires a DB connection with standard enumeration values.)
+
+   ```shell
+   ARCHIVESSPACE_VERSION=X.Y.Z APPCONFIG_DB_URL=$APPCONFIG_DB_URL build/run doc:api
+   ```
+
+   This generates `docs/slate/source/index.html.md` (Slate source document).
+
+3. (Optional) Run a Docker container to preview the API docs.
+
+   ```shell
+   docker-compose -f docker-compose-docs.yml up
+   ```
+
+   Visit `http://localhost:4568` to preview the API docs.
+
+4. Build the static API files. The API markdown document should already be in `docs/slate/source` (step 2 above).
+   The API markdown will be rendered to HTML and moved to `docs/build/api`.
+
+   ```shell
+   docker run --rm --name slate -v $(pwd)/docs/build/api:/srv/slate/build -v $(pwd)/docs/slate/source:/srv/slate/source slatedocs/slate build
+   ```
+
+### Build the YARD docs
+
+1. Build the YARD docs in the `docs/build/doc` directory:
+
+   ```shell
+   ./build/run doc:yardoc
+   ```
+
+### Commit built docs and push to GitHub Pages
+
+1. Double check that you are on a release branch (we don't need this stuff in master).
Commit the newly built documentation and push it to the `gh-pages` branch only:
+
+   ```shell
+   git add docs/build
+   git commit -m "release-vx.y.z api and yard documentation"
+   ```
+
+   Use `git subtree` to push the documentation to the `gh-pages` branch:
+
+   ```shell
+   git subtree push --prefix docs/build origin gh-pages
+   ```
+
+   Published documents should appear a short while later at:
+   [http://archivesspace.github.io/archivesspace/api](http://archivesspace.github.io/archivesspace/api)
+   [http://archivesspace.github.io/archivesspace/doc](http://archivesspace.github.io/archivesspace/doc)
+
+   Note: if the push command fails you may need to delete `gh-pages` in the remote repo:
+
+   ```shell
+   git push origin :gh-pages
+   ```
+
+   **Note:** do not push the docs/build directory to the release branch, as it is only meant to be maintained in the `gh-pages` branch.
+
+## Building a release yourself
+
+1. Building the actual release is very simple. Run the following:
+
+   ```shell
+   ./scripts/build_release vX.X.X
+   ```
+
+   Replace X.X.X with the version number. This will build and package a release
+   in a zip file.
+
+## Building a release on GitHub
+
+1. There is no need to build the release yourself. Just push your tag to GitHub
+   and trigger the `release` workflow:
+
+   ```shell
+   git push origin vX.X.X
+   ```
+
+   Replace X.X.X with the version number. The release will be created as a **draft**; it will not be automatically published.
+
+## Create the Release with Notes
+
+### Build the release notes
+
+**As of v3.4.0, it should no longer be necessary to build release notes manually.**
+
+To manually generate release notes:
+
+Create a deployment token on your [GitHub developer settings](https://github.com/settings/tokens).
+
+```shell
+export GITHUB_TOKEN={YOUR DEPLOYMENT TOKEN ON GITHUB}
+./build/run doc:release_notes -Dcurrent_tag=v3.4.0 -Doutfile=RELEASE_NOTES.md -Dtoken=$GITHUB_TOKEN
+```
+
+#### Edit Release Page As Necessary
+
+If there are any special considerations, add them to the release page manually. Special considerations
+might include changes that will require 3rd party plugins to be updated or
+that a full reindex is required.
+
+Example content:
+
+```md
+This release requires a **full reindex** of ArchivesSpace for all functionality to work
+correctly. Please follow the [instructions for reindexing](/administration/indexes)
+before starting ArchivesSpace with the new version.
+```
+
+## Post-release updates
+
+After a release has been put out, it's time for some maintenance before the next
+cycle of development clicks into full gear. Consider the following, depending on
+current team consensus:
+
+### Branches
+
+Delete merged and stale branches in GitHub as appropriate.
+
+### Milestones
+
+Close the just-released Milestone, adding a due date of today's date. Create a
+new Milestone for the anticipated next release (this can be changed later if the
+version numbering is changed for some reason).
+
+### Test Servers
+
+Review existing test servers, and request the removal of any that are no longer
+needed (e.g. feature branches that have been merged for the release).
+
+### GitHub Issues
+
+Review existing open GH issues and close any that have been resolved by
+the new release (linking to a specific PR if possible). For the remaining open
+issues, review if they are still a problem, apply labels, link to known JIRA
+issues, and add comments as necessary/relevant.
+
+### Accessibility Scan
+
+Run accessibility scans for both the public and staff sites and file a ticket
+for any new and ongoing accessibility errors.
+
+### PR Assignments
+
+Begin assigning queued PRs to members of the Core Committers group, making
+sure to include the appropriate milestone for the anticipated next release.
+
+### Dependencies
+
+#### Gems
+
+Take a look at all the `Gemfile.lock` files (in backend, frontend, public,
+etc.) and review the gem versions. Pay close attention to the Rails & Friends
+(ActiveSupport, ActionPack, etc.), Rack, and Sinatra versions and make sure
+there have not been any security patch releases. There usually are, especially
+since Rails ships fix updates rather frequently.
+
+To update the gems, update the version in the `Gemfile`, delete `Gemfile.lock`, and
+run `./build/run bootstrap` to download everything. Then make sure your test
+suite passes.
+
+Once everything passes, commit the changed `Gemfile` and `Gemfile.lock` files.
diff --git a/src/content/docs/es/development/release_schema_versions.md b/src/content/docs/es/development/release_schema_versions.md
new file mode 100644
index 0000000..42a75d1
--- /dev/null
+++ b/src/content/docs/es/development/release_schema_versions.md
@@ -0,0 +1,41 @@
+---
+title: Database versions by release
+description: A list of ArchivesSpace releases and their corresponding database versions.
+---
+
+| Release | DB Version |
+| ------- | ---------- |
+| 1.1.0 | 33 |
+| 1.1.1 | 35 |
+| 1.1.2 | 35 |
+| 1.2.0 | 38 |
+| 1.3.0 | 56 |
+| 1.4.0 | 59 |
+| 1.4.1 | 59 |
+| 1.4.2 | 59 |
+| 1.5.0 | 74 |
+| 1.5.1 | 74 |
+| 1.5.2 | 75 |
+| 1.5.3 | 75 |
+| 1.5.4 | 75 |
+| 2.0.0 | 84 |
+| 2.0.1 | 84 |
+| 2.1.0 | 92 |
+| 2.1.1 | 92 |
+| 2.1.2 | 92 |
+| 2.2.0 | 93 |
+| 2.2.1 | 94 |
+| 2.2.2 | 95 |
+| 2.3.0 | 97 |
+| 2.3.1 | 97 |
+| 2.3.2 | 97 |
+| 2.4.0 | 100 |
+| 2.4.1 | 100 |
+| 2.5.0 | 102 |
+| 2.5.1 | 102 |
+| 2.5.2 | 108 |
+| 2.6.0 | 120 |
+| 2.7.0 | 126 |
+| 2.7.1 | 129 |
+| 2.8.0 | 135 |
+| 2.8.1 | 138 |
diff --git a/src/content/docs/es/development/releases.md b/src/content/docs/es/development/releases.md
new file mode 100644
index 0000000..2b31a65
--- /dev/null
+++ b/src/content/docs/es/development/releases.md
@@ -0,0 +1,192 @@
+---
+title: Releases
+description: A list of ArchivesSpace releases, their release dates, schema numbers, and links to the release on GitHub.
+---
+
+3.4.0 May 24, 2023
+The schema number for this release is 172.
+https://github.com/archivesspace/archivesspace/tree/v3.4.0
+
+3.3.1 Oct 4, 2022
+The schema number for this release is 164.
+https://github.com/archivesspace/archivesspace/tree/v3.3.1
+
+3.2.0 February 4, 2022
+The schema number for this release is 159.
+https://github.com/archivesspace/archivesspace/releases/download/v3.2.0/archivesspace-v3.2.0.zip
+
+3.1.1 November 19, 2021
+The schema number for this release is 157.
+https://github.com/archivesspace/archivesspace/releases/download/v3.1.0/archivesspace-v3.1.1.zip
+
+3.1.0 September 20, 2021
+The schema number for this release is 157.
+https://github.com/archivesspace/archivesspace/releases/download/v3.1.0/archivesspace-v3.1.0.zip
+
+3.0.2 August 11, 2021
+The schema number for this release is 148.
+https://github.com/archivesspace/archivesspace/releases/download/v3.0.2/archivesspace-v3.0.2.zip
+
+3.0.1 June 4, 2021
+The schema number for this release is 147.
+https://github.com/archivesspace/archivesspace/releases/download/v3.0.1/archivesspace-v3.0.1.zip + +3.0.0 May 10, 2021 +The schema number for this release is 147. +[Bug in Release] + +2.8.1 Nov 11, 2020. +The schema number for this release is 138. +https://github.com/archivesspace/archivesspace/releases/download/v2.8.1/archivesspace-v2.8.1.zip + +2.8.0 Jul 16, 2020. +The schema number for this release is 135. +https://github.com/archivesspace/archivesspace/releases/download/v2.8.0/archivesspace-v2.8.0.zip + +2.7.1 Feb 14, 2020. +The schema number for this release is 129. +https://github.com/archivesspace/archivesspace/releases/download/v2.7.1/archivesspace-v2.7.1.zip + +2.7.0 Oct 9, 2019. +The schema number for this release is 126. +https://github.com/archivesspace/archivesspace/releases/download/v2.7.0/archivesspace-v2.7.0.zip + +2.6.0 May 30, 2019. +The schema number for this release is 120. +https://github.com/archivesspace/archivesspace/releases/download/v2.6.0/archivesspace-v2.6.0.zip + +2.5.2 Jan 15, 2019. +The schema number for this release is 108. +https://github.com/archivesspace/archivesspace/releases/download/v2.5.2/archivesspace-v2.5.2.zip + +2.5.1 Oct 17, 2018. +This release includes no new database migrations. +https://github.com/archivesspace/archivesspace/releases/download/v2.5.1/archivesspace-v2.5.1.zip + +2.5.0 Aug 10, 2018. +The schema number for this release is 102. +https://github.com/archivesspace/archivesspace/releases/download/v2.5.0/archivesspace-v2.5.0.zip + +2.4.1 Jun 22, 2018. +This release includes no new database migrations. +https://github.com/archivesspace/archivesspace/releases/download/v2.4.1/archivesspace-v2.4.1.zip + +2.4.0 Jun 7, 2018. +The schema number for this release is 100. +https://github.com/archivesspace/archivesspace/releases/download/v2.4.0/archivesspace-v2.4.0.zip + +2.3.2 Mar 27, 2018. +This release includes no new database migrations. 
+https://github.com/archivesspace/archivesspace/releases/download/v2.3.2/archivesspace-v2.3.2.zip + +2.3.1 Feb 28, 2018. +This release includes no new database migrations. +https://github.com/archivesspace/archivesspace/releases/download/v2.3.1/archivesspace-v2.3.1.zip + +2.3.0 Feb 5, 2018. +The schema number for this release is 97. +https://github.com/archivesspace/archivesspace/releases/download/v2.3.0/archivesspace-v2.3.0.zip + +2.2.2 Dec 13, 2017. +The schema number for this release is 95. +https://github.com/archivesspace/archivesspace/releases/download/v2.2.2/archivesspace-v2.2.2.zip + +2.2.0 Oct 12, 2017. +The schema number for this release is 93. +https://github.com/archivesspace/archivesspace/releases/download/v2.2.0/archivesspace-v2.2.0.zip + +2.1.2 Sep 1, 2017. +The schema number for this release is 92. +https://github.com/archivesspace/archivesspace/releases/download/v2.1.2/archivesspace-v2.1.2.zip + +2.1.1 Aug 16, 2017. +The schema number for this release is 92. +https://github.com/archivesspace/archivesspace/releases/download/v2.1.1/archivesspace-v2.1.1.zip + +2.1.0 Jul 18, 2017. +The schema number for this release is 92. +https://github.com/archivesspace/archivesspace/releases/download/v2.1.0/archivesspace-v2.1.0.zip + +2.0.1 May 2, 2017. +The schema number for this release is 84. +https://github.com/archivesspace/archivesspace/releases/download/v2.0.1/archivesspace-v2.0.1.zip + +2.0.0 Apr 18, 2017. +The schema number for this release is 84. +https://github.com/archivesspace/archivesspace/releases/download/v2.0.0/archivesspace-v2.0.0.zip + +1.5.4 Mar 16, 2017. +The schema number for this release is 75. +https://github.com/archivesspace/archivesspace/releases/download/v1.5.4/archivesspace-v1.5.4.zip + +1.5.3 Feb 15, 2017. +The schema number for this release is 75. +https://github.com/archivesspace/archivesspace/releases/download/v1.5.3/archivesspace-v1.5.3.zip + +1.5.2 Dec 8, 2016. +The schema number for this release is 75. 
+https://github.com/archivesspace/archivesspace/releases/download/v1.5.2/archivesspace-v1.5.2.zip + +1.5.1 Jul 29, 2016. +The schema number for this release is 74. +https://github.com/archivesspace/archivesspace/releases/download/v1.5.1/archivesspace-v1.5.1.zip + +1.5.0 Jul 20, 2016. +The schema number for this release is 74. +https://github.com/archivesspace/archivesspace/releases/download/v1.5.0/archivesspace-v1.5.0.zip + +1.4.2 Oct 27, 2015. +The schema number for this release is 59. +https://github.com/archivesspace/archivesspace/releases/download/v1.4.2/archivesspace-v1.4.2.zip + +1.4.1 Oct 13, 2015. +The schema number for this release is 59. +https://github.com/archivesspace/archivesspace/releases/download/v1.4.1/archivesspace-v1.4.1.zip + +1.4.0 Sep 29, 2015. +The schema number for this release is 59. +https://github.com/archivesspace/archivesspace/releases/download/v1.4.0/archivesspace-v1.4.0.zip + +1.3.0 Jun 30, 2015. +The schema number for this release is 56. +https://github.com/archivesspace/archivesspace/releases/download/v1.3.0/archivesspace-v1.3.0.zip + +1.2.0 Mar 30, 2015. +The schema number for this release is 38. +https://github.com/archivesspace/archivesspace/releases/download/v1.2.0/archivesspace-v1.2.0.zip + +1.1.2 Jan 21, 2015. +The schema number for this release is 35. +https://github.com/archivesspace/archivesspace/releases/download/v1.1.2/archivesspace-v1.1.2.zip + +1.1.1 Jan 6, 2015. +The schema number for this release is 35. +https://github.com/archivesspace/archivesspace/archive/refs/tags/v1.1.1.zip (only source available) + +1.1.0 Oct 20, 2014. +The schema number for this release is 33. +https://github.com/archivesspace/archivesspace/releases/download/v1.1.0/archivesspace-v1.1.0.zip + +1.0.9 May 13, 2014. +The schema number for this release is ??? +https://github.com/archivesspace/archivesspace/releases/download/v1.0.9/archivesspace-v1.0.9.zip + +1.0.7.1 March 7, 2014. +The schema number for this release is ??? 
+https://github.com/archivesspace/archivesspace/releases/download/v1.0.7.1/archivesspace-v1.0.7.1.zip
+
+1.0.4 Jan 14, 2014.
+The schema number for this release is ???
+https://github.com/archivesspace/archivesspace/releases/download/v1.0.4/archivesspace-v1.0.4.zip
+
+1.0.2 Nov 26, 2013.
+The schema number for this release is ???
+https://github.com/archivesspace/archivesspace/releases/download/v1.0.2/archivesspace-v1.0.2.zip
+
+1.0.1 Nov 1, 2013.
+The schema number for this release is ???
+https://github.com/archivesspace/archivesspace/releases/download/v1.0.1/archivesspace-v1.0.1.zip
+
+1.0.0 Oct 4, 2013.
+The schema number for this release is ???
+https://github.com/archivesspace/archivesspace/releases/download/v1.0.0/archivesspace-v1.0.0.zip
diff --git a/src/content/docs/es/development/ui_test.md b/src/content/docs/es/development/ui_test.md
new file mode 100644
index 0000000..c64d6a6
--- /dev/null
+++ b/src/content/docs/es/development/ui_test.md
@@ -0,0 +1,140 @@
+---
+title: UI tests
+description: Instructions on running automated browser tests with Selenium on the ArchivesSpace UI in both Firefox and Chrome.
+---
+
+ArchivesSpace's staff and public interfaces use [Selenium](http://docs.seleniumhq.org/) to run automated browser tests. These tests can be run using [Firefox via geckodriver](https://firefox-source-docs.mozilla.org/testing/geckodriver/geckodriver/index.html) and [Chrome](https://sites.google.com/a/chromium.org/chromedriver/home) (either regular Chrome or headless).
+
+## UI tests with Firefox (default)
+
+Firefox is the default browser used in our [CI workflows](https://github.com/archivesspace/archivesspace/actions).
+
+On Ubuntu Linux 22.04 or later, the included Firefox deb package is a transition package that actually installs Firefox through [snap](https://snapcraft.io/). Snap has security restrictions that do not work with automated testing without additional configuration.
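+
+Before reinstalling, you can check whether the Firefox on a given machine is actually the snap build (the `classify_firefox` helper is a hypothetical sketch; it relies on snap-installed binaries resolving under `/snap/`):
+
```shell
# Classify a firefox binary path: snap build, missing, or a regular install.
classify_firefox() { # usage: classify_firefox PATH
  case "$1" in
    /snap/*) echo snap ;;
    "")      echo none ;;
    *)       echo other ;;
  esac
}

# Inspect the firefox currently on PATH (empty string if not installed).
classify_firefox "$(command -v firefox || true)"
```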
+
+To uninstall the Firefox snap package and reinstall it as a traditional deb package on Ubuntu Linux, use:
+
+```bash
+# remove old snap firefox package (if installed)
+sudo snap remove firefox
+
+# create a keyring directory (if not existing)
+sudo install -d -m 0755 /etc/apt/keyrings
+
+# download mozilla key and add it to the keyring
+wget -q https://packages.mozilla.org/apt/repo-signing-key.gpg -O- | sudo tee /etc/apt/keyrings/packages.mozilla.org.asc > /dev/null
+
+# add the mozilla apt repository and give its packages a high priority
+echo "deb [signed-by=/etc/apt/keyrings/packages.mozilla.org.asc] https://packages.mozilla.org/apt mozilla main" | sudo tee -a /etc/apt/sources.list.d/mozilla.list > /dev/null
+echo '
+Package: *
+Pin: origin packages.mozilla.org
+Pin-Priority: 1000
+' | sudo tee /etc/apt/preferences.d/mozilla
+
+# install firefox
+sudo apt update && sudo apt install firefox
+```
+
+When using Firefox, you need to make sure that the version of geckodriver you are using works with your Firefox version; see this [compatibility table](https://firefox-source-docs.mozilla.org/testing/geckodriver/Support.html). Get your installed Firefox version by running `firefox --version`.
+
+On Linux, you can download the geckodriver version that corresponds to your Firefox version [here](https://github.com/mozilla/geckodriver/releases).
+
+On Mac you can use `brew install geckodriver`.
+
+## UI tests with Chrome
+
+To run using Chrome, you must first download the appropriate [ChromeDriver
+executable](https://sites.google.com/a/chromium.org/chromedriver/downloads)
+and place it somewhere in your OS system path. Mac users with Homebrew may accomplish this via `brew install --cask chromedriver`.
+
+**Please note, you must have either Firefox or Chrome installed on your system to
+run these tests.
Consult the [Firefox WebDriver](https://developer.mozilla.org/en-US/docs/Web/WebDriver)
+or [ChromeDriver](https://sites.google.com/a/chromium.org/chromedriver/home)
+documentation to ensure your Selenium, driver, browser, and OS versions all match
+and support each other.**
+
+## Before running:
+
+Run the bootstrap build task to configure JRuby and all required dependencies:
+
+```bash
+$ cd ..
+$ build/run bootstrap
+```
+
+Note: all example code assumes you are running from your ArchivesSpace project directory.
+
+## Running the tests:
+
+```bash
+# Frontend tests
+./build/run frontend:selenium # Firefox, headless
+FIREFOX_OPTS= ./build/run frontend:selenium # Firefox, no opts = headed
+
+SELENIUM_CHROME=true ./build/run frontend:selenium # Chrome, headless
+SELENIUM_CHROME=true CHROME_OPTS= ./build/run frontend:selenium # Chrome, no opts = headed
+
+# Public tests
+./build/run public:test # Firefox, headless
+FIREFOX_OPTS= ./build/run public:test # Firefox, no opts = headed
+
+SELENIUM_CHROME=true ./build/run public:test # Chrome, headless
+SELENIUM_CHROME=true CHROME_OPTS= ./build/run public:test # Chrome, no opts = headed
+```
+
+Tests can be scoped to specific files or groups:
+
+```bash
+./build/run .. -Dspec='path/to/spec/from/spec/directory' # single file
+./build/run .. -Dexample='[description from it block]' # specific block
+
+# Examples
+./build/run frontend:selenium -Dexample='Repository model'
+FIREFOX_OPTS= ./build/run frontend:selenium -Dexample='Repository model' # Firefox, headed
+
+./build/run public:test -Dspec='features/accessibility_spec.rb'
+SELENIUM_CHROME=true CHROME_OPTS= ./build/run public:test -Dspec='features/accessibility_spec.rb' # Chrome, headed
+```
+
+Tests require a backend and a frontend service to be running.
To avoid the overhead of starting and stopping them while developing, you can run tests against a dev backend:
+
+```bash
+# start mysql and solr containers:
+docker-compose -f docker-compose-dev.yml up
+
+# start services:
+supervisord -c supervisord/archivesspace.conf
+
+# run a spec using the started backend:
+ASPACE_TEST_BACKEND_URL='http://localhost:4567' ./build/run frontend:test -Dpattern="./features/events_spec.rb"
+
+# run all examples that contain "can spawn" in their description:
+./build/run frontend:test -Dpattern="./features/accessions_spec.rb" -Dexample="can spawn"
+```
+
+Note, however, that some tests depend on a sequence of ordered steps and may not always run cleanly in isolation. In this case, more than the example provided may be run, and/or unexpected failures may result.
+
+### Saved pages on spec failures
+
+When frontend specs fail, a screenshot and an HTML page are saved for each failed example under `frontend/tmp/capybara`. On the CI, a zip file will be available for each failed CI job run under Summary -> Artifacts. In order to load the assets (and not see plain HTML) when viewing the saved HTML pages, a dev server should be running locally on port 3000; see [Running a development version of ArchivesSpace](/development/dev).
+
+### Keeping the test database up to date
+
+When calling `./build/run frontend:test` to run frontend specs, the following steps happen before the actual specs run:
+
+- All tables of the test database are dropped: `./build/run db:nuke:test`
+- `frontend/spec/fixtures/archivesspace-test.sql` is loaded into the test database: `./build/run db:load:test`
+- Any not-yet-applied migrations are run: `./build/run db:migrate:test`
+
+#### Updating the test database dump
+
+If any migrations are being applied whenever you run one or all frontend specs, it means that the test database dump `frontend/spec/fixtures/archivesspace-test.sql` is out of date.
A new test database dump can be created by running:
+
+```bash
+./build/run db:nuke:test
+./build/run db:load:test
+./build/run db:migrate:test
+./build/run db:dump:test
+```
+
+An updated `frontend/spec/fixtures/archivesspace-test.sql` will be created that can be committed and pushed to a Pull Request.
diff --git a/src/content/docs/es/development/vscode.md b/src/content/docs/es/development/vscode.md
new file mode 100644
index 0000000..729f336
--- /dev/null
+++ b/src/content/docs/es/development/vscode.md
@@ -0,0 +1,70 @@
+---
+title: Using the VS Code editor
+description: Instructions for using the VS Code editor with ArchivesSpace, including prerequisites and setup.
+---
+
+ArchivesSpace provides a [VS Code settings file](https://github.com/archivesspace/archivesspace/blob/master/.vscode/settings.json) that makes it easy for contributors using VS Code to follow the code style of the project and work with the end-to-end tests. Using this tool chain in your editor helps fix code format and lint errors _before_ committing files or running tests. In many cases such errors will be fixed automatically when the file being worked on is saved. Errors that can't be fixed automatically will be highlighted with squiggly lines. Hovering your cursor over these lines will display a description of the error to help reach a solution.
+
+## Prerequisites
+
+1. [Node.js](https://nodejs.org)
+2. [Ruby](https://www.ruby-lang.org/)
+3. [VS Code](https://code.visualstudio.com/)
+
+## Set up VS Code
+
+### Add system dependencies
+
+1. [ESLint](https://eslint.org/)
+2. [Prettier](https://prettier.io/)
+3. [Rubocop](https://rubocop.org/)
+4. [Stylelint](https://stylelint.io/)
+
+#### Rubocop
+
+```bash
+gem install rubocop
+```
+
+See https://docs.rubocop.org/rubocop/installation.html for further information, including using Bundler.
+
+#### ESLint, Prettier, Stylelint
+
+Run the following command from the ArchivesSpace root directory.
+ +```bash +npm install +``` + +See [package.json](https://github.com/archivesspace/archivesspace/blob/master/package.json) for further details on how these tools are used in ArchivesSpace. + +### Add VS Code extensions + +Add the following extensions via the VS Code command palette or the Extensions panel. (See this [documentation for installing and managing extensions](https://code.visualstudio.com/learn/get-started/extensions)). + +1. [ESLint](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) (dbaeumer.vscode-eslint) +2. [Prettier](https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode) (esbenp.prettier-vscode) +3. [Ruby Rubocop Revised](https://marketplace.visualstudio.com/items?itemName=LoranKloeze.ruby-rubocop-revived) (LoranKloeze.ruby-rubocop-revived) +4. [Stylelint](https://marketplace.visualstudio.com/items?itemName=stylelint.vscode-stylelint) (stylelint.vscode-stylelint) + +Optional — for enhancing work with the end-to-end tests: + +5. [Cucumber](https://marketplace.visualstudio.com/items?itemName=CucumberOpen.cucumber-official) (CucumberOpen.cucumber-official) — see [End-to-end test integration](#end-to-end-test-integration), especially step-definition navigation. + +It's important to note that since these extensions work in tandem with the [VS Code settings file](https://github.com/archivesspace/archivesspace/blob/master/.vscode/settings.json), these settings only impact your ArchivesSpace VS Code Workspace, not your global VS Code User settings. + +The extensions should now work out of the box at this point providing error messages and autocorrecting fixable errors on file save! + +### End-to-end test integration + +The ArchivesSpace repository includes optional VS Code workspace configuration that integrates the Cucumber end-to-end test suite with the editor. 
The files [`.vscode/example.tasks.json`](https://github.com/archivesspace/archivesspace/blob/master/.vscode/example.tasks.json) and [`.vscode/example.settings.json`](https://github.com/archivesspace/archivesspace/blob/master/.vscode/example.settings.json) are not enabled by default, so they do not override your personal editor configuration. + +**Enable the tasks** + +Copy the example tasks file to `.vscode/tasks.json`. This adds a task that runs the e2e test suite with the correct working directory, Ruby environment, and environment variables. Run it via **Terminal → Run Task… → Cucumber: Run e2e-test** (the same command as in the [e2e test documentation](/development/e2e_tests)). You may optionally supply a feature file path, `file.feature:line`. + +**Step-definition navigation** + +Integrate the contents of `example.settings.json` into your existing `.vscode/settings.json` (do not replace the existing file, but merge the Cucumber-related settings if you desire to use them so your current workspace settings are preserved). + +This configures the Cucumber extension for `e2e-tests/**/*.feature` and shared Ruby step definitions, enabling jump-to-definition, undefined-step detection, and discovery of shared steps. This simplifies contributing new end-to-end tests. diff --git a/src/content/docs/es/index.mdx b/src/content/docs/es/index.mdx new file mode 100644 index 0000000..3d6ec85 --- /dev/null +++ b/src/content/docs/es/index.mdx @@ -0,0 +1,14 @@ +--- +title: ArchivesSpace Technical Documentation +description: Technical documentation for ArchivesSpace, the open source archives management tool. 
+tableOfContents: false
+editUrl: false
+issueUrl: false
+lastUpdated: false
+prev: false
+next: false
+---
+
+import Homepage from '@components/HomePage.astro'
+
+<Homepage />
diff --git a/src/content/docs/es/migrations/migrate_from_archivists_toolkit.md b/src/content/docs/es/migrations/migrate_from_archivists_toolkit.md
new file mode 100644
index 0000000..c45195b
--- /dev/null
+++ b/src/content/docs/es/migrations/migrate_from_archivists_toolkit.md
@@ -0,0 +1,126 @@
+---
+title: Migrating from Archivists' Toolkit
+description: Guidelines for migrating data from Archivists' Toolkit 2.0 Update 16 to all ArchivesSpace 2.1.x or 2.2.x releases using the migration tool provided by ArchivesSpace.
+---
+
+These guidelines are for migrating data from Archivists' Toolkit 2.0 Update 16 to all ArchivesSpace 2.1.x or 2.2.x releases using the migration tool provided by ArchivesSpace. Migrations of data from earlier versions of the Archivists' Toolkit (AT) or other versions of ArchivesSpace are not supported by these guidelines or by the migration tool.
+
+> Note: A migration from Archivists' Toolkit to ArchivesSpace should not be run against an active production database.
+
+## Preparing for migration
+
+- Make a copy of the AT instance, including the database, to be migrated and use it as the source of the migration. It is strongly recommended that you not use your AT production instance and database as the source of the migration for the simple reason of protecting the production version from any anomalies that might occur during the migration process.
+- Review your source database for the quality of the data. Look for invalid records, duplicate name and subject records, and duplicate controlled values. Irregular data will either be carried forward to the ArchivesSpace instance or, in some cases, block the migration process.
+- Select a representative sample of accession, resource, and digital object records to be examined closely when the migration is completed.
Make sure the sample includes both the simplest and the most complicated or extensive records in the overall data collection.
+
+### Notes
+
+- An AT subject record will be set to type 'topical' if it does not have a valid AT type statement or its type is not one of the types in ArchivesSpace. Several other AT LookupList values are not present in ArchivesSpace. These LookupList values cannot be added during the AT migration process and will therefore need to be changed in AT prior to migration. For full details on enum (controlled value list) mappings, see the data map. You can use the AT Lookup List tool to change values that will not map correctly, as specified by the data map.
+- Record audit information (created by, date created, modified by, and date modified) will not migrate from AT to ArchivesSpace. ArchivesSpace will assign new audit data to each record as it is imported into ArchivesSpace. The exception to this is that the username of the user who creates an accession record will be migrated to the accession general note field.
+- Set up a production ArchivesSpace instance, including the MySQL database you will migrate into. Instructions are included at [Getting Started with ArchivesSpace](/administration/getting_started) and [Running ArchivesSpace against MySQL](/provisioning/mysql).
+
+## Preparing for Migrating AT Data
+
+- The migration process is iterative in nature. A migration report is generated at the end of each migration routine. The report indicates errors or issues occurring with the migration. (An example of an AT migration report is provided at the end of this document.) You should use this report to determine whether any problems observed in the migration results are best remedied in the source data or in the migrated data in the ArchivesSpace instance. If you address the problems in the source data, then you can simply conduct the migration again.
+- However, once you accept the migration and address problems in the migrated data, you cannot migrate the source data again without establishing a new target ArchivesSpace instance. Migrating data to a previously migrated ArchivesSpace database may result in a great many duplicate record error messages and may cause unrecoverable damage to the ArchivesSpace database.
+- Please note, data migration can be a very memory- and time-intensive task due to the large number of records being transferred. As such, we recommend running the AT migration on a computer with at least 2GB of available memory.
+- Make sure your ArchivesSpace MySQL database is set up correctly, following the documentation in the ArchivesSpace README file. When creating a MySQL database, you MUST set the default character encoding for the database to UTF8. This is particularly important if you use a MySQL client, such as Navicat, MySQL Workbench, phpMyAdmin, etc., to create the database. See [Running ArchivesSpace against MySQL](/provisioning/mysql) for more details.
+- Increase the maximum Java heap space if you are experiencing timeout events. To do so:
+  - Stop the current ArchivesSpace instance.
+  - Open the file `archivesspace.sh` (Linux / Mac OSX) or `archivesspace.bat` (Windows) in a text editor. The file is located in the ArchivesSpace installation directory.
+  - Find the text string "-Xmx512m" and change it to "-Xmx1024m".
+  - Save the file.
+  - Restart the ArchivesSpace instance.
+  - Restart the AT migration process.
+
+## Running the Migration Tool as an AT Plugin
+
+- Make sure that the AT instance you want to migrate from is shut down. Next, download the "scriptAT.zip" file from the at-migration releases page on GitHub (https://github.com/archivesspace/at-migration/releases) and copy the file into the plugins folder of the AT instance, overwriting the one that's already there if needed.
+- Make sure the ArchivesSpace instance that you are migrating into is up and running.
+- Restart the AT instance to load the newly installed plug-in. To run the plug-in go to the "Tools" menu, then select "Script Runtime v1.0", and finally "ArchivesSpace Data Migrator". This will cause the plug-in window to display. + +![AT migrator](../../../../images/at_migrator.jpg) + +- Change the default information in the Migrator UI: + - **Threads** – Used to specify the number of clients that are used to copy Resource records simultaneously. The limit on the number of clients depends on the record size and allocated memory. A number from 4 to 6 is generally a good value to use, but can be reduced if an "Out of Memory Exception" occurs. + - **Host** – The URL and port number of the ArchivesSpace backend server + - **"Copy records when done" checkbox** – Used to specify that the records should + be copied once the repository check has completed. + - **Password** – password for the ArchivesSpace "admin" account. The default value + of "admin" should work unless it was changed by the ArchivesSpace + administrator. + - **Reset Password** – Each user account transferred has its password reset to this. + Please note that users need to change their password when they first log-in + unless LDAP is used for authentication. + - **"Specify Type of Extent Data" Radio button** – If you are using the BYU Plugin, + select that option. Otherwise, leave as the default – Normal or Harvard Plugin. + - **Specify Unlinked Records to NOT Copy checkboxes** – If you have name or + subject records that are not linked to accessions, resources, or digital objects, + you can choose not to migrate those to ArchivesSpace. + - **"Records to Publish?" checkboxes** – Used to specify what types of records + should be published after they are migrated to ArchivesSpace. + - **Text box showing -refid_unique, -term_default** – This is needed for the + functioning of the migration tool. Please do not make changes to this area. 
+  - **Output Console** – Display section for following the migration while it is running.
+  - **View Error Log** – Used to view a printout of all the errors encountered during the
+    migration process. This can be used while the migration process is underway as well.
+- Once you have made the appropriate changes to the UI, there are three buttons to choose from to start the migration process.
+  - **Copy to ArchivesSpace** – This starts the migration to the ArchivesSpace instance
+    indicated by the Host URL.
+  - **Run Repository Check** – The repository check searches for, and attempts to fix, repository misalignment between Resources and linked Accession/Digital Object records. The fix applied entails copying the linked accession/digital object record to the repository of the resource record in the ArchivesSpace database (those record positions are not modified in the AT database).
+
+    As long as accession records are not linked to multiple Resource records in different repositories, the fix will be valid. Otherwise, you will receive a warning message. For such cases, the Resource and Accession record(s) will still be migrated, but without links to one another; those links will need to be re-established in ArchivesSpace.
+
+    This misalignment problem involves only accession and resource records and not digital object records, as accession and resource records have a many-to-many relationship. Assessments also can have a many-to-many relationship with resources, accessions, and digital objects. However, since assessments are small and quick to copy, they will simply be copied to as many repositories as needed to establish all the appropriate links.
+
+    If the "Copy Records When Done" checkbox is selected, the records will be migrated to the ArchivesSpace instance once the check is completed.
+
+  - **Continue Previous Migration** – If the migration process fails, this is used to skip to the place where the previous failed migration left off. This should allow the migration process of resource records to be gracefully restarted without having to clean out the ArchivesSpace backend database and start from scratch.
+
+- For the most part, the data migration process should be automatic, with an error log being generated when completed. However, depending on the particular data, various errors may occur that would require the migration to be re-run after they have been resolved by the user. The time a migration takes to complete will depend on a number of factors (database size, network performance, etc.), but can be anywhere from a couple of hours to a few days.
+- Data from the following AT modules will migrate:
+  - Lookup Lists
+  - Repositories
+  - Locations
+  - Users
+  - Subjects
+  - Names
+  - Accessions
+  - Digital Object and Digital Object Components
+  - Resources and Resource Components
+  - Assessments
+- Data from the following AT modules will not migrate:
+  - Reports
+    > INFORMATION MISSING FROM SOURCE DOCUMENT - NEEDS REVIEW!!!
+
+## Assessing the Migration and Cleaning Up Data
+
+Use the migration report to assess the fidelity of the migration and to determine whether to:
+
+- Fix data in the source AT instance and conduct the migration again, or
+- Fix data in the target ArchivesSpace instance.
+
+If you choose to fix the data in AT and conduct the migration again, you will need to delete all the content in the ArchivesSpace instance.
+
+If you accept the migration in the ArchivesSpace instance, the following outlines how to check and fix your data.
+
+- Re-establish user passwords. While user records will migrate, the passwords associated with them will not. You will need to re-assign those passwords according to the policies or conventions of your repositories.
+- Review closely the set of sample records you selected:
+  - Accessions
+  - Resources
+  - Digital objects
+- Review the following groups of records, making sure the correct number of records migrated:
+  - Accessions
+  - Assessments
+  - Resources
+  - Digital objects
+  - Controlled vocabulary lists
+  - Subjects
+  - Agents (Name records in AT)
+  - Locations
+  - Collection Management Classifications
+  - There may be a few extra agent records due to ArchivesSpace defaults, or extra assessments if they were linked to records from more than one repository.
+- In conducting the reviews, look for duplicate or incomplete records, broken links, or truncated data.
+- Take special care to make sure your container data and locations are correct. The model for this is significantly different between AT and ArchivesSpace (where locations are tied to a container rather than directly to a resource or accession), so this presents some challenges for migration.
+- Merge enumeration values as necessary. For instance, if you had both 'local' and 'local sources' as a source for names, it might be a good idea to merge these values.
diff --git a/src/content/docs/es/migrations/migrate_from_archon.md b/src/content/docs/es/migrations/migrate_from_archon.md
new file mode 100644
index 0000000..f0402fb
--- /dev/null
+++ b/src/content/docs/es/migrations/migrate_from_archon.md
@@ -0,0 +1,180 @@
+---
+title: Migrating from Archon
+description: Guidelines for migrating data from Archon 3.21-rev3 to ArchivesSpace 2.2.2 using the migration tool provided by ArchivesSpace.
+---
+
+These guidelines are for migrating data from Archon 3.21-rev3 to ArchivesSpace 2.2.2 using the migration tool provided by ArchivesSpace. Migrations of data from earlier versions of Archon or other versions of ArchivesSpace are not supported by these guidelines or the migration tool.
+
+> Note: A migration from Archon to ArchivesSpace should not be run against an active production database. 
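
Working from a disposable copy protects the production database during trial runs. A minimal sketch of taking that copy with `mysqldump` (the host, credentials, and database names are placeholders, not values from your installation):

```shell
# Dump the production Archon database to a file (all names are placeholders)
mysqldump -h db.example.com -u archon -p archon_db > archon_copy.sql

# Create a scratch database and load the dump into it; point the
# migration tool at this copy rather than at production
mysql -h db.example.com -u archon -p -e "CREATE DATABASE archon_copy"
mysql -h db.example.com -u archon -p archon_copy < archon_copy.sql
```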
+
+## Preparing for migration
+
+Select a representative sample of accession, classification, collection, collection content, and digital object records to be examined closely when the migration is completed. Make sure to include both simple and more complicated or extensive records in the sample.
+
+Review your Archon database for data quality.
+
+### Accession Records
+
+- Supply an accession date for all records, when possible. If an accession date is not recorded in Archon, the date of 01/01/9999 will be supplied during the migration process. If you wish to change this default value, you may do so by editing the following file in the new Archon distribution, prior to running the migration:
+  `packages/core/templates/default/accession-list.inc.php`
+- Supply an identifier for all records, when possible. If an identifier is not recorded in Archon, a supplied identifier will be constructed during the migration process, consisting of the date and the truncated accession title.
+
+### Classification Records
+
+Ensure that there are no duplicate classification titles at the same level in the classification hierarchy. If the migration tool encounters a duplicate value, some of the save operations for classifications will fail, and you will need to redo the migration.
+
+### Collection Records
+
+If normalized dates are not recorded correctly (i.e. if the end date and begin date are reversed), they will not be migrated or may cause the migration to fail. To check for such entries, a system administrator can run the following query against the database:
+
+`SELECT ID, Title, NormalDateBegin, NormalDateEnd FROM tblCollections_Collections WHERE NormalDateBegin > NormalDateEnd;`
+
+### Level/Container Manager
+
+Review the settings to make sure that each 'level container' is appropriately marked with the correct values for "Intellectual Level" and "Physical Container" and that EAD Values are correctly recorded. 
+ +![Level Container Manager](../../../../images/archon_level.jpg) + +Failure to code level container values correctly may result in incorrect nesting of resource components in ArchivesSpace. While the following information does not need to be acted upon prior to migration, please note the following if you find that content is not nested correctly after you migrate: + +- Collection content records that have a level container that is 'Intellectual Only' will be migrated to ArchivesSpace as resource components. Each level/container that has 'intellectual level' checked should have a valid value recorded in the "EAD Level" field (i.e. class, collection, file, fonds, item, otherlevel, recordgrp, series, subfonds, subgrp, subseries). These values are case sensitive, and all other values will be migrated as "otherlevel" on the collection content/resource component records to which they apply. +- Collection content records that have a level container that is 'Physical Only' will be migrated to ArchivesSpace as instance records of the type 'text' attached to a container in ArchivesSpace. These instance/container records will be attached to the intellectual level or levels that are immediate children of the container record as it was previously expressed in Archon. If the instance/container has no children it will be attached to its parent intellectual level instead. For illustrative purposes, the following screenshots show a container record prior to and following migration. + ![Archon container example](../../../../images/archon_container.jpg) +- Collection content records that have both physical and intellectual levels will be migrated as both resource components and instances. In this case the instance will be attached to the resource component. +- Collection content records that are neither physical nor intellectual levels will be migrated as if they were 'Intellectual Only'. This is not recommended and should be fixed prior to migration. 
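
Before migrating, it can also help to see how collection content is spread across your level/containers, since miscoded ones affect nesting. A read-only sketch using the table and column names referenced later in this guide (the database name and credentials are placeholders):

```shell
# Count content records per level/container; only the table and column
# names come from this guide, everything else is a placeholder
mysql -u archon -p archon_db -e "
  SELECT LevelContainerID, COUNT(*) AS content_records
  FROM tblCollections_Content
  GROUP BY LevelContainerID
  ORDER BY content_records DESC;"
```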
+
+### Collection Content Records
+
+- If a value has not been set in the "Title" or "Inclusive Dates" field of an "intellectual" level/container in Archon, the collection content record being migrated will be supplied a title, based on its "label" value and the "level/container" type set in Archon.
+  ![Collection Content Records](../../../../images/archon_collection.jpg)
+- Optionally, if a migration fails, check for collection content records that reference invalid 'level/containers'. These records are found in the database tables, but are not visible to staff or end users and must be eliminated prior to migration. If not eliminated, the migration will fail. In order to identify these records, you should follow these steps. **Be very careful. If you are uncertain what you are doing, back up the database first or speak with a systems administrator!**
+- In MySQL or SQL Server, open the table titled 'tblCollections_LevelContainers'. Note the 'ID' value recorded for each row (i.e. LevelContainer).
+- Run a query against tblCollections_Content to find records where the LevelContainerID column references an invalid value. For example, if tblCollections_LevelContainers holds 'ID' values 1-6 and 8-22:
+  `SELECT * FROM tblCollections_Content WHERE LevelContainerID > 22 OR (LevelContainerID > 6 AND LevelContainerID < 8);`
+  This will provide a list of all records with an invalid 'LevelContainerID' (i.e. where a record with the primary key referenced by a foreign key cannot be found). Review this list carefully to make sure you are comfortable deleting the records, or change the LevelContainerID to a valid integer if you wish to retain the records. If you choose to delete the records, you will need to do so directly in the database (see below). If you choose to do the latter, you may need to take additional steps directly in the database to link these records to a valid parent content record or collection; additional instructions can be supplied upon request. 
+- Run a query to delete the invalid records from the collections content table. For example:
+  `DELETE FROM tblCollections_Content WHERE LevelContainerID > 22 OR (LevelContainerID > 6 AND LevelContainerID < 8);`
+- Optionally, if the migration fails, check for 'duplicate' collection content records. 'Duplicate' records are those that occupy the same node in the collection/content hierarchy. To check for these records, run the following query in MySQL or SQL Server:
+  `SELECT ParentID, SortOrder, COUNT(*) FROM tblCollections_Content GROUP BY ParentID, SortOrder HAVING COUNT(*) > 1;`
+- The query above checks for records that occupy the same branch and same position in the content hierarchy. If you discover such records, the sort order value of one of the records must be changed, so that both records occupy a unique position. In order to do this, run a query that finds all records attached to the parent record, then run an update query to change the sort order of one of the offending records so that each has a unique sort order. For example, if the query above returns ParentID 8619 as a 'duplicate' value, you would run query one with the appropriate ParentID value to identify the offending records, and query two to fix the problem:
+  **Query one:**
+
+  `SELECT ID, ParentID, SortOrder, Title FROM tblCollections_Content WHERE ParentID=8619;`
+
+  | ID   | ParentID | SortOrder | Title       |
+  | ---- | -------- | --------- | ----------- |
+  | 8620 | 8619     | 1         | to mother   |
+  | 8621 | 8619     | 1         | from mother |
+  | 8622 | 8619     | 3         | to father   |
+  | 6823 | 8619     | 4         | from father |
+
+  **Query two:**
+
+  `UPDATE tblCollections_Content SET SortOrder=2 WHERE ID=8621;`
+
+## Preparing for Migrating Archon Data
+
+The migration process is iterative in nature. You should plan to do several test migrations, culminating in a final migration. Typically, migration will require assistance from a system administrator. 
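
Each test migration needs an empty target. One way to reset between runs, assuming a dedicated MySQL database named `aspace` and a standard ArchivesSpace installation (the names and the script path are assumptions; see the ArchivesSpace administration documentation for your version's database setup step):

```shell
# Drop and recreate the target database (destroys all migrated data!)
mysql -u root -p -e "DROP DATABASE aspace; CREATE DATABASE aspace DEFAULT CHARACTER SET utf8;"

# From the ArchivesSpace installation directory, rebuild the schema
# before starting the next migration run
scripts/setup-database.sh
```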
+
+The migration tool will connect to your Archon installation, read data from defined 'endpoints', and place the information in a target ArchivesSpace instance.
+
+A migration report is generated at the end of each migration routine and can be downloaded from the application. The report indicates errors or issues occurring with the migration. Sample data from a migration report is provided in [Appendix A](#appendix-a-migration-log-review).
+
+You should use this report to determine if any problems observed in the migration results are best remedied in the source data or in the migrated data in the ArchivesSpace instance. If you address the problems in the source data, then you can simply clear the database and conduct the migration again. However, once you accept the migration and make changes to the migrated data in ArchivesSpace, you cannot migrate the source data again without either overwriting the previous migration or establishing a new target ArchivesSpace instance.
+
+Please note, data migration can be a very memory- and time-intensive task due to the large amount of records being transferred. As such, we recommend running the Archon migration tool on a server with at least 2GB of available memory. Test migrations have run from under an hour to twelve hours or more in the case of complex and large instances of Archon.
+
+Before starting the migration process, make sure that your current Archon installation is up to date: i.e. that you are using version 3.21 rev3. If you are on an earlier version of Archon, make a copy of the Archon instance, including the database, to be migrated and use it as the source of the migration. It is strongly recommended that you not use your Archon production instance and database as the source of the migration, for the simple reason of protecting the production version from any anomalies that might occur during the migration process. 
Upgrade the copy of the Archon instance to version 3.21 rev3 prior to starting the migration process.
+
+### Get Archon to ArchivesSpace Migration Tool
+
+Download the latest JAR file release from https://github.com/archivesspace-deprecated/ArchonMigrator/releases/latest. This is an executable JAR file – double-click to run it.
+
+### Install ArchivesSpace Instance
+
+Install an ArchivesSpace production instance, including setting up a MySQL database to migrate into. Instructions are included at [Getting Started with ArchivesSpace](/administration/getting_started) and [Running ArchivesSpace against MySQL](/provisioning/mysql).
+
+### Prepare to Launch Migration
+
+> **Important Note:** The migration process should be launched from a networked computer with a stable (i.e. wired) connection, and you should turn power save settings off on the client computer you use to launch the migration. So that the migration can proceed in an undisturbed fashion, you should not try to access the ArchivesSpace or Archon front end or public interface until after the migration has completed. **If you fail to follow these instructions, the migration tool may not provide useful feedback and it will be difficult to determine how successful the migration was.**
+
+For the most part, the data migration process should be automatic, with errors being reported as the tool migrates and a log being made available when migration is complete. Depending on the particular data being migrated, various errors may occur. These may require the migration to be re-run after they have been resolved by the user. When this occurs, the MySQL database should be emptied by the system administrator, and the migration rerun after steps are taken to resolve the problem that caused the error.
+
+The time that the migration takes to complete will depend on a number of factors (database size, network performance, etc.), but has been known to take anywhere from a half hour to ten or twelve hours. 
Most of this time will probably be spent migrating collection records.
+
+The following Archon datatypes will migrate, and all relationships that exist between these datatypes should be preserved in ArchivesSpace, except as noted in bold below. For each datatype, post-migration cleanup recommendations are provided in parentheses:
+
+- Editable controlled value lists:
+  - Subject sources (review post migration and merge values with ArchivesSpace defaults or functionally duplicate values, when possible)
+  - Creator sources (review post migration and merge values with ArchivesSpace defaults or functionally duplicate values, when possible)
+  - Extent units/types (merge functionally duplicate values)
+  - Material Types
+  - Container Types
+  - File Types
+  - Processing Priorities
+- Repositories
+- User/logins (users will need to reset passwords)
+- Subjects (subjects of type personal, corporate, or family name are migrated as Agent records, and are linked to resources and digital objects in the subject role. Review these records and merge with duplicate agent names from the creator migration, when possible.)
+- Creators/Names
+- Accessions (The migration tool will supply accession identifiers when these are blank in Archon. Review and change values, if appropriate.)
+- Digital Objects: The migration tool will generate digital object metadata records in ArchivesSpace for each digital library record that is stored in your Archon instance. For each file that has an attached digital library record, the migration tool will generate a digital object component and file instance record. In addition, the migration tool will provide a folder containing the source file you uploaded to Archon when you created the record. In order to link these files to the digital file records in ArchivesSpace, you should place the files in a single directory on a webserver.
+  **To preserve the linkage between the files and their metadata in ArchivesSpace, you must provide the base URL of the folder where the objects will be placed.** The migration tool prepends this URL to the filename to form a complete path to the object location, for each file being exported, as shown in the screenshot below. (In version 2.2.2 of ArchivesSpace, with the default digital object templates, these files will be available in the public interface by clicking a link.)
+- Locations (Controlled location records are much more granular in ArchivesSpace than in Archon. You should have a location record for each unique combination of location drop down, range, section, and shelf in Archon, and these records should be linked to top container records which are in turn linked to an instance for each collection where they apply.)
+- Resources and Resource Components (see Locations, above).
+
+Data from the following Archon modules will not migrate to ArchivesSpace:
+
+- Books (Book data could be migrated later if a plugin is developed to support this data).
+- AVSAP/Assessments
+
+## Launch Migration Process
+
+Make sure the ArchivesSpace instance that you are migrating into is up and running, then open up the migration tool.
+
+![Archon migrator](../../../../images/archon_migrator.jpg)
+
+1. Change the default information in the migration tool user interface:
+   - Archon Source – Supply the base URL for the Archon instance.
+   - Archon User – Username for an account with full administrator privileges.
+   - Password – Password for that same account.
+   - Download Digital Object Files checkbox – Check if you want to move any attached digital object files and supply a webpath to a web accessible folder where you intend to place the digital objects after the migration is complete.
+   - Set Download Folder – Clicking this will open a file explorer that will allow you to specify the folder to which you want digital files from Archon to be downloaded. 
+   - Set Default Repository checkbox – Select the "Set Default Repository" checkbox to set which repository Accession records and unlinked Digital Objects are copied to. The default is "Based on Linked Collection," which will copy Accession records to the same repository as any Collection records they are linked to, or the first repository if they are not. You can also select a specific repository from the drop-down list.
+   - Host – The URL and port number of the ArchivesSpace backend server.
+   - ASpace admin – User name for the ArchivesSpace "admin" account. The default value of "admin" should work unless it was changed by the ArchivesSpace administrator.
+   - Password – Password for the ArchivesSpace "admin" account. The default value of "admin" should work unless it was changed by the ArchivesSpace administrator.
+   - Reset Password – Each user account transferred has its password reset to this. Please note that users need to change their password when they first log in unless LDAP is used for authentication.
+   - Migration Options – This is needed for the functioning of the migration tool. Please do not make changes to this area.
+   - Output Console – Display section for following the migration while it is running.
+   - View Error Log – Used to view a printout of all the errors encountered during the migration process. This can be used while the migration process is underway as well.
+2. Press the "Copy to ArchivesSpace" button to start the migration process. This starts the migration to the ArchivesSpace instance indicated by the Host URL.
+3. If the migration process fails: Review the error message provided and/or the migration log. Fix any issues that have been identified, clear the target MySQL database, and try again.
+4. When the process has completed:
+   - Download the migration report.
+   - Move digital objects into the folder location corresponding to the URL you provided to the migration tool.
+
+## Assessing the Migration and Cleaning Up Data
+
+1. 
Use the migration report to assess the fidelity of the migration and to determine whether to fix data in the source Archon instance and conduct the migration again, or fix data in the target ArchivesSpace instance. If you choose to fix data in Archon, you will need to clear the ArchivesSpace database and rerun the migration.
+2. Review the following record types, making sure the correct number of records migrated. In conducting the reviews, look for duplicate or incomplete records, broken links, or truncated data.
+   - Controlled vocabulary lists
+   - Classifications
+   - Accessions
+   - Resources
+   - Digital objects
+   - Subjects (not persons, families, and corporate bodies)
+   - Creators (known as Agents in ArchivesSpace)
+   - Locations
+3. Review closely the set of sample records you selected, comparing data in Archon to data in ArchivesSpace.
+4. If you accept the migration in the ArchivesSpace instance, then proceed to re-establish user passwords. While user records will migrate, the passwords associated with them will not. You will need to reassign those passwords according to the policies or conventions of your repositories.
+
+## Appendix A: Migration Log Review
+
+The migration log provides a description of any irregularities that take place during a migration and should be saved in a secure location for future reference. The log contains both save errors and warnings. The warnings should be reviewed after the migration for information and potential action.
+
+Most warnings will not require a follow-up action. For example, they may note that a supplied value has been provided to meet an ArchivesSpace data model requirement. This occurs for all collections with empty identifiers. Occasionally, warnings will indicate that there was a problem establishing a link between two records for a reason such as a resource component not being found. 
Warnings like this should be cause for review since they may indicate that some data was lost. + +Save errors will note that a particular piece of data could not be migrated because it is not supported in the ArchivesSpace data model or for some other reason. In these cases, you should review the record in Archon and in ArchivesSpace if it was migrated at all. Oftentimes, these occur due to duplicate records (such as if you have a matching creator and person subject). If a save error occurs due to a duplicate record, this is usually okay but should still be reviewed to make sure there was no data loss. If a save error occurs for any other reason, this typically means the migration will need to be rerun (unless the record it occurred on is not needed or is easier just to migrate manually). + +Typically, the migration log will record the Archon internal IDs of the original Archon object being migrated whenever a save error or warning occurs. This simplifies finding and correcting relevant records. diff --git a/src/content/docs/es/migrations/migration_tools.md b/src/content/docs/es/migrations/migration_tools.md new file mode 100644 index 0000000..523f0e4 --- /dev/null +++ b/src/content/docs/es/migrations/migration_tools.md @@ -0,0 +1,59 @@ +--- +title: Migration tools +description: Links to tools for migrating data into and out of ArchivesSpace. 
+---
+
+## Archivists' Toolkit
+
+- [AT migration tool instructions](/migrations/migrate_from_archivists_toolkit)
+- [AT migration plugin](https://github.com/archivesspace/at-migration/releases)
+- [AT migration source code](https://github.com/archivesspace/at-migration)
+- [AT migration mapping (for 2.x versions of the tool and ArchivesSpace)](https://github.com/archivesspace/at-migration/blob/master/docs/ATMappingDocument.xlsx)
+
+### Older information
+
+- [AT migration guidelines (for migrations using the original migration tool through version 1.4.2; only supports migrations to version 1.4.2 or lower of ArchivesSpace)](http://archivesspace.org/wp-content/uploads/2016/08/ATMigrationGuidelines-REV-20140417.pdf)
+- [AT migration mapping (for migrations through version 1.4.2 or lower of the tool and ArchivesSpace)](http://archivesspace.org/wp-content/uploads/2016/08/ATMappingDocument_AT-ASPACE_BETA.xls)
+
+## Archon
+
+- [Archon migration tool instructions](/migrations/migrate_from_archon)
+- [Archon migration tool](https://github.com/archivesspace/archon-migration/releases/latest)
+- [Archon migration source code](https://github.com/archivesspace/archon-migration/)
+- [Archon migration mapping (for 2.x versions of the tool and ArchivesSpace)](https://docs.google.com/spreadsheets/d/13soN5djk16QYmRoSajtyAc_nBrNldyL58ViahKFJAog/edit?usp=sharing)
+
+### Older information
+
+- [refactored Archon migration plugin](https://github.com/archivesspace-deprecated/ArchonMigrator/releases)
+- [information about refactoring project](https://archivesspace.atlassian.net/browse/AR-1278)
+- [previous Archon migration plugin](https://github.com/archivesspace/archon-migration/releases)
+- [Plugin read me text](https://github.com/archivesspace-deprecated/ArchonMigrator/blob/master/README.md)
+- [Archon migration guidelines](http://archivesspace.org/wp-content/uploads/2016/05/Archon_Migration_Guidelines-7_13_2017.docx)
+- [Archon migration 
mapping](http://archivesspace.org/wp-content/uploads/2016/08/ArchonSchemaMappingsPublic.xlsx)
+
+## Data Import and Export Maps
+
+- [Accession CSV Map](http://archivesspace.org/wp-content/uploads/2016/05/Accession-CSV-mapping-2013-08-05.xlsx)
+- [Accession CSV Template](https://github.com/archivesspace/archivesspace/tree/master/templates)
+- [Archival Objects from Excel or CSV with Load Via Spreadsheet](https://github.com/archivesspace/archivesspace/tree/master/templates)
+- [Assessment CSV Template](https://github.com/archivesspace/archivesspace/tree/master/templates)
+- [Digital Object CSV Map](http://archivesspace.org/wp-content/uploads/2016/08/DigitalObject-CSV-mapping-2013-02-26.xlsx)
+- [Digital Object CSV Template](https://github.com/archivesspace/archivesspace/tree/master/templates)
+- [Digital Objects Export Maps](http://archivesspace.org/wp-content/uploads/2016/08/ASpace-Dig-Object-Exports.xlsx)
+- [EAD Import / Export Map](https://archivesspace.org/wp-content/uploads/2021/06/EAD-Import-Export-Mapping-20171030.xlsx)
+- [Location Record CSV Template](https://github.com/archivesspace/archivesspace/tree/master/templates)
+- (newly reviewed) [MARCXML Import Map](https://archivesspace.org/wp-content/uploads/2021/06/AS-MARC-import-mappings-2021-06-15.xlsx)
+- [MARCXML Export Map](https://archivesspace.org/wp-content/uploads/2021/06/MARCXML-Export-Mapping-20130715.xlsx)
+- [MARCXML Authority Import / Export Map](https://archivesspace.org/wp-content/uploads/2021/05/Agents-ASpace-to-MARCXMLMay2021.xlsx)
+- [EAC-CPF Import / Export Map](https://archivesspace.org/wp-content/uploads/2021/05/Agents-ASpace-to-EAC-CPFMay2021.xlsx)
+
+### OAI-PMH-only maps
+
+Most ArchivesSpace OAI-PMH responses are based on the export maps above, but there are a few that are only available through OAI-PMH:
+
+[MODS for resources and resource 
components](https://archivesspace.org/wp-content/uploads/2019/06/MODS-OAI-Export-Mapping-20190610.xlsx) +[Dublin Core for resources and resource components](https://archivesspace.org/wp-content/uploads/2019/06/DC-OAI-Export-Mapping-20190610.xlsx) +[DCMI Metadata Terms for resources and resource components](https://archivesspace.org/wp-content/uploads/2019/06/DCTerms-OAI-Export-Mapping-20190611.xlsx) diff --git a/src/content/docs/es/provisioning/clustering.md b/src/content/docs/es/provisioning/clustering.md new file mode 100644 index 0000000..db73b24 --- /dev/null +++ b/src/content/docs/es/provisioning/clustering.md @@ -0,0 +1,370 @@ +--- +title: Load balancing and multiple tenants +description: Guidelines for running ArchivesSpace in a clustered environment for load-balancing purposes, and for supporting multiple tenants. +--- + +This document describes two aspects of running ArchivesSpace in a +clustered environment: for load-balancing purposes, and for supporting +multiple tenants (isolated installations of the system in a common +deployment environment). + +The configuration described in this document is one possible approach, +but it is not intended to be prescriptive: the application layer of +ArchivesSpace is stateless, so any mechanism you prefer for load +balancing across web applications should work just as well as the one +described here. + +Unless otherwise stated, it is assumed that you have root access on +your machines, and all commands are to be run as root (or with sudo). 
+
+## Architecture overview
+
+This document assumes an architecture with the following components:
+
+- A load balancer machine running the Nginx web server
+- Two application servers, each running a full ArchivesSpace
+  application stack
+- A MySQL server
+- A shared NFS volume mounted under `/aspace` on each machine
+
+## Overview of files
+
+The `files` directory in this repository (in the same directory as this
+`README.md`) contains what will become the contents of the `/aspace`
+directory, shared by all servers. It has the following layout:
+
+    /aspace
+    ├── archivesspace
+    │   ├── config
+    │   │   ├── config.rb
+    │   │   └── tenant.rb
+    │   ├── software
+    │   └── tenants
+    │       └── _template
+    │           └── archivesspace
+    │               ├── config
+    │               │   ├── config.rb
+    │               │   └── instance_hostname.rb.example
+    │               └── init_tenant.sh
+    └── nginx
+        └── conf
+            ├── common
+            │   └── server.conf
+            └── tenants
+                └── _template.conf.example
+
+The highlights:
+
+- `/aspace/archivesspace/config/config.rb` -- A global configuration file for all ArchivesSpace instances. Any configuration options added to this file will be applied to all tenants on all machines.
+- `/aspace/archivesspace/software/` -- This directory will hold the master copies of the `archivesspace.zip` distribution. Each tenant will reference one of the versions of the ArchivesSpace software in this directory.
+- `/aspace/archivesspace/tenants/` -- Each tenant will have a sub-directory under here, based on the `_template` directory provided. This holds the configuration files for each tenant.
+- `/aspace/archivesspace/tenants/[tenant name]/config/config.rb` -- The global configuration file for [tenant name]. This contains tenant-specific options that should apply to all of the tenant's ArchivesSpace instances (such as their database connection settings). 
+- `/aspace/archivesspace/tenants/[tenant name]/config/instance_[hostname].rb` -- The configuration file for a tenant's ArchivesSpace instance running on a particular machine. This allows configuration options to be set on a per-machine basis (for example, setting different ports for different application servers) +- `/aspace/nginx/conf/common/server.conf` -- Global Nginx configuration settings (applying to all tenants) +- `/aspace/nginx/conf/tenants/[tenant name].conf` -- A tenant-specific Nginx configuration file. Used to set the URLs of each tenant's ArchivesSpace instances. + +## Getting started + +We'll assume you already have the following ready to go: + +- Three newly installed machines, each running RedHat (or CentOS) + Linux (we'll refer to these as `loadbalancer`, `apps1` and + `apps2`). +- A MySQL server. +- An NFS volume that has been mounted as `/aspace` on each machine. + All machines should have full read/write access to this area. +- An area under `/aspace.local` which will store instance-specific + files (such as log files and Solr indexes). Ideally this is just + a directory on local disk. +- Java 1.6 (or above) installed on each machine. + +### Populate your /aspace/ directory + +Start by copying the directory structure from `files/` into your +`/aspace` volume. This will contain all of the configuration files +shared between servers: + +```shell +mkdir /var/tmp/aspace/ +cd /var/tmp/aspace/ +unzip -x /path/to/archivesspace.zip +cp -av archivesspace/clustering/files/* /aspace/ +``` + +You can do this on any machine that has access to the shared +`/aspace/` volume. + +### Install the cluster init script + +On your application servers (`apps1` and `apps2`) you will need to +install the supplied init script: + +```shell +cp -a /aspace/aspace-cluster.init /etc/init.d/aspace-cluster +chkconfig --add aspace-cluster +``` + +This will start all configured instances when the system boots up, and +can also be used to start/stop individual instances. 
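
Once registered, the script behaves like any other SysV service. A sketch of day-to-day usage (the per-instance argument form is an assumption; check the supplied script for its exact interface):

```shell
# Start every configured instance on this application server
service aspace-cluster start

# Stop and restart a single tenant's instance (argument form assumed)
service aspace-cluster stop exampletenant
service aspace-cluster start exampletenant
```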
+
+### Install and configure Nginx
+
+You will need to install Nginx on your `loadbalancer` machine, which
+you can do by following the directions at
+http://nginx.org/en/download.html. Using the pre-built packages for
+your platform is fine. At the time of writing, the process for CentOS
+is simply:
+
+```shell
+wget http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm
+rpm -i nginx-release-centos-6-0.el6.ngx.noarch.rpm
+yum install nginx
+```
+
+Nginx will place its configuration files under `/etc/nginx/`. For
+now, the only change we need to make is to configure Nginx to load our
+tenants' configuration files. To do this, edit
+`/etc/nginx/conf.d/default.conf` and add the line:
+
+```
+include /aspace/nginx/conf/tenants/*.conf;
+```
+
+_Note:_ the location of Nginx's main config file might vary between
+systems. Another likely candidate is `/etc/nginx/nginx.conf`.
+
+### Download the ArchivesSpace distribution
+
+Rather than having every tenant maintain their own copy of the
+ArchivesSpace software, we put a shared copy under
+`/aspace/archivesspace/software/` and have each tenant instance refer
+to that copy. To set this up, run the following commands on any one
+of the servers:
+
+```shell
+cd /aspace/archivesspace/software/
+unzip -x /path/to/downloaded/archivesspace-x.y.z.zip
+mv archivesspace archivesspace-x.y.z
+ln -s archivesspace-x.y.z stable
+```
+
+Note that we unpack the distribution into a directory containing its
+version number, and then assign that version the symbolic name
+"stable". This gives us a convenient way of referring to particular
+versions of the software, and we'll use this later on when setting up
+our tenant.
+
+We'll be using MySQL, which means we must make the MySQL connector
+library available. 
To do this, place it in the `lib/` directory of +the ArchivesSpace package: + +```shell +cd /aspace/archivesspace/software/stable/lib +wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.24/mysql-connector-java-5.1.24.jar +``` + +## Defining a new tenant + +With our server setup out of the way, we're ready to define our first +tenant. As shown in _Overview of files_ above, each tenant has their +own directory under `/aspace/archivesspace/tenants/` that holds all of +their configuration files. In defining our new tenant, we will: + +- Create a Unix account for the tenant +- Create a database for the tenant +- Create a new set of ArchivesSpace configuration files for the + tenant +- Set up the database + +Our newly defined tenant won't initially have any ArchivesSpace +instances, but we'll set those up afterwards. + +To complete the remainder of this process, there are a few bits of +information you will need. In particular, you will need to know: + +- The identifier you will use for the tenant you will be creating. + In this example we use `exampletenant`. +- Which port numbers you will use for the application's backend, + Solr instance, staff and public interfaces. These must be free on + your application servers. +- If running each tenant under a separate Unix account, the UID and + GID you'll use for them (which must be free on each of your + servers). +- The public-facing URLs for the new tenant. We'll use + `staff.example.com` for the staff interface, and `public.example.com` + for the public interface. + +### Creating a Unix account + +Although not strictly required, for security and ease of system +monitoring it's a good idea to have each tenant instance running under +a dedicated Unix account. + +We will call our new tenant `exampletenant`, so let's create a user +and group for them now. 
You will need to run these commands on _both_
+application servers (`apps1` and `apps2`):
+
+```shell
+groupadd --gid 2000 exampletenant
+useradd --uid 2000 --gid 2000 exampletenant
+```
+
+Note that we specify a UID and GID explicitly to ensure they match
+across machines.
+
+### Creating the database
+
+ArchivesSpace assumes that each tenant will have their own MySQL
+database. You can create this from the MySQL shell:
+
+```sql
+create database exampletenant default character set utf8;
+grant all on exampletenant.* to 'example'@'%' identified by 'example123';
+```
+
+In this example, we have a MySQL database called `exampletenant`, and
+we grant full access to the user `example` with password `example123`.
+Assuming our database server is `db.example.com`, this corresponds to
+the database URL:
+
+```
+jdbc:mysql://db.example.com:3306/exampletenant?user=example&password=example123&useUnicode=true&characterEncoding=UTF-8
+```
+
+We'll make use of this URL in the following section.
+
+### Creating the tenant configuration
+
+Each tenant has their own set of files under the
+`/aspace/archivesspace/tenants/` directory. We'll define our new
+tenant (called `exampletenant`) by copying the template set of
+configurations and running the `init_tenant.sh` script to set them
+up. We can do this on either `apps1` or `apps2`--it only needs to be
+done once:
+
+```shell
+cd /aspace/archivesspace/tenants
+cp -a _template exampletenant
+```
+
+Note that we've named the tenant `exampletenant` to match the Unix
+account it will run as. Later on, the startup script will use this
+fact to run each instance as the correct user.
+
+For now, we'll just edit the configuration file for this tenant, under
+`exampletenant/archivesspace/config/config.rb`.
When you open this file you'll see two +placeholders that need filling in: one for your database URL, which in +our case is just: + +``` +jdbc:mysql://db.example.com:3306/exampletenant?user=example&password=example123&useUnicode=true&characterEncoding=UTF-8 +``` + +and the other for this tenant's search, staff and public user secrets, +which should be random, hard to guess passwords. + +## Adding the tenant instances + +To add our tenant instances, we just need to initialize them on each +of our servers. On `apps1` _and_ `apps2`, we run: + +```shell +cd /aspace/archivesspace/tenants/exampletenant/archivesspace +./init_tenant.sh stable +``` + +If you list the directory now, you will see that the `init_tenant.sh` +script has created a number of symlinks. Most of these refer back to +the `stable` version of the ArchivesSpace software we unpacked +previously, and some contain references to the `data` and `logs` +directories stored under `/aspace.local`. + +Each server has its own configuration file that tells the +ArchivesSpace application which ports to listen on. To set this up, +make two copies of the example configuration by running the following +command on `apps1` then `apps2`: + +```shell +cd /aspace/archivesspace/tenants/exampletenant/archivesspace +cp config/instance_hostname.rb.example config/instance_`hostname`.rb +``` + +Then edit each file to set the URLs that the instance will use. 
+
+Here's our `config/instance_apps1.example.com.rb`:
+
+```ruby
+{
+  :backend_url => "http://apps1.example.com:8089",
+  :frontend_url => "http://apps1.example.com:8080",
+  :solr_url => "http://apps1.example.com:8090",
+  :indexer_url => "http://apps1.example.com:8091",
+  :public_url => "http://apps1.example.com:8081",
+}
+```
+
+Note that the filename is important here: it must be:
+
+```
+instance_[server hostname].rb
+```
+
+These URLs will determine which ports the application listens on when
+it starts up, and are also used by the ArchivesSpace indexing system
+to track updates across the cluster.
+
+### Starting up
+
+As a one-off, we need to populate this tenant's database with the
+default set of tables. You can do this by running the
+`setup-database.sh` script on either `apps1` or `apps2`:
+
+```shell
+cd /aspace/archivesspace/tenants/exampletenant/archivesspace
+scripts/setup-database.sh
+```
+
+With the two instances configured, you can now use the init script to
+start them up on each server:
+
+```shell
+/etc/init.d/aspace-cluster start-tenant exampletenant
+```
+
+and you can monitor each instance's log file under
+`/aspace.local/tenants/exampletenant/logs/`. Once they're started,
+you should be able to connect to each instance with your web browser
+at the configured URLs.
+
+## Configuring the load balancer
+
+Our final step is configuring Nginx to accept requests for our staff
+and public interfaces and forward them to the appropriate application
+instance. Working on the `loadbalancer` machine, we create a new
+configuration file for our tenant:
+
+```shell
+cd /aspace/nginx/conf/tenants
+cp -a _template.conf.example exampletenant.conf
+```
+
+Now open `/aspace/nginx/conf/tenants/exampletenant.conf` in an
+editor. You will need to:
+
+- Replace `<tenantname>` with `exampletenant` where it appears.
+- Change the `server` URLs to match the hostnames and ports you
+  configured each instance with.
+- Insert the tenant's hostnames for each `server_name` entry. In + our case these are `public.example.com` for the public interface, and + `staff.example.com` for the staff interface. + +Once you've saved your configuration, you can test it with: + + /usr/sbin/nginx -t + +If Nginx reports that all is well, reload the configurations with: + + /usr/sbin/nginx -s reload + +And, finally, browse to `http://public.example.com/` to verify that Nginx +is now accepting requests and forwarding them to your app servers. +We're done! diff --git a/src/content/docs/es/provisioning/domains.md b/src/content/docs/es/provisioning/domains.md new file mode 100644 index 0000000..9fa0d3e --- /dev/null +++ b/src/content/docs/es/provisioning/domains.md @@ -0,0 +1,85 @@ +--- +title: Serving over subdomains +description: How to configure ArchivesSpace and your web server to serve the application over subdomains. +--- + +This document describes how to configure ArchivesSpace and your web server to serve the application over subdomains (e.g., `http://staff.myarchive.org/` and `http://public.myarchive.org/`), which is the recommended +practice. Separate documentation is available if you wish to [serve ArchivesSpace under a prefix](/provisioning/prefix) (e.g., `http://aspace.myarchive.org/staff` and +`http://aspace.myarchive.org/public`). + +1. [Configuring Your Firewall](#Step-1%3A-Configuring-Your-Firewall) +2. [Configuring Your Web Server](#Step-2%3A-Configuring-Your-Web-Server) + - [Apache](#Apache) + - [Nginx](#Nginx) +3. [Configuring ArchivesSpace](#Step-3%3A-Configuring-ArchivesSpace) + +## Step 1: Configuring Your Firewall + +Since using subdomains negates the need for users to access the application directly on ports 8080 and 8081, these should be locked down to access by localhost only. 
On a Linux server, this can be done using iptables: + +```shell +iptables -A INPUT -p tcp -s localhost --dport 8080 -j ACCEPT +iptables -A INPUT -p tcp --dport 8080 -j DROP +iptables -A INPUT -p tcp -s localhost --dport 8081 -j ACCEPT +iptables -A INPUT -p tcp --dport 8081 -j DROP +``` + +## Step 2: Configuring Your Web Server + +### Apache + +The [mod_proxy module](https://httpd.apache.org/docs/2.4/mod/mod_proxy.html) is necessary for Apache to route public web traffic to ArchivesSpace's ports as designated in your config.rb file (ports 8080 and 8081 by default). + +This can be set up as a reverse proxy in the Apache configuration like so: + +```apache +<VirtualHost *:80> +ServerName public.myarchive.org +ProxyPass / http://localhost:8081/ +ProxyPassReverse / http://localhost:8081/ +</VirtualHost> + +<VirtualHost *:80> +ServerName staff.myarchive.org +ProxyPass / http://localhost:8080/ +ProxyPassReverse / http://localhost:8080/ +</VirtualHost> +``` + +The purpose of ProxyPass is to route _incoming_ traffic on the public URL (public.myarchive.org) to port 8081 of your server, where ArchivesSpace's public interface sits. The purpose of ProxyPassReverse is to intercept _outgoing_ traffic and rewrite the header to match the URL that the browser is expecting to see (public.myarchive.org). 
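+
+If the proxy modules are not already enabled, on Debian/Ubuntu-style Apache
+layouts (an assumption -- other distributions typically load modules via
+`LoadModule` lines in httpd.conf) this can be done with:
+
+```shell
+# Enable mod_proxy and its HTTP backend, then check and reload Apache
+a2enmod proxy proxy_http
+apachectl configtest && systemctl reload apache2
+```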
+
+### nginx
+
+To use nginx as a reverse proxy, you need a configuration file like
+this:
+
+```nginx
+server {
+    listen 80;
+    listen [::]:80;
+    server_name staff.myarchive.org;
+
+    location / {
+        proxy_pass http://localhost:8080/;
+    }
+}
+
+server {
+    listen 80;
+    listen [::]:80;
+    server_name public.myarchive.org;
+
+    location / {
+        proxy_pass http://localhost:8081/;
+    }
+}
+```
+
+## Step 3: Configuring ArchivesSpace
+
+The only configuration within ArchivesSpace that needs to occur is adding your domain names to the following lines in config.rb:
+
+```ruby
+AppConfig[:frontend_proxy_url] = 'http://staff.myarchive.org'
+AppConfig[:public_proxy_url] = 'http://public.myarchive.org'
+```
+
+This configuration allows the staff edit links to appear on the public site for users logged in to the staff interface.
+
+Do **not** change `AppConfig[:public_url]` or `AppConfig[:frontend_url]`; these must retain their port numbers in order for the application to run.
diff --git a/src/content/docs/es/provisioning/https.md b/src/content/docs/es/provisioning/https.md
new file mode 100644
index 0000000..b02732c
--- /dev/null
+++ b/src/content/docs/es/provisioning/https.md
@@ -0,0 +1,163 @@
+---
+title: Serving over HTTPS
+description: Installing ArchivesSpace in such a manner that all end-user requests are served over HTTPS.
+---
+
+This document describes the approach for those wishing to install
+ArchivesSpace in such a manner that all end-user requests (i.e., URLs)
+are served over HTTPS rather than HTTP.
For the purposes of this documentation, the URLs for the staff and public interfaces will be: + +- `https://staff.myarchive.org` - staff interface +- `https://public.myarchive.org` - public interface + +The configuration described in this document is one possible approach, +and to keep things simple the following are assumed: + +- ArchivesSpace is running on a single Linux server +- The server is running Apache or Nginx +- You have obtained an SSL certificate and key from an authority +- You have ensured that appropriate firewall ports have been opened (80 and 443). + +1. [Configuring the Web Server](<#Step-1%3A-Configure-Web-Server-(Apache-or-Nginx)>) + - [Apache](#Apache) + - [Setting up SSL](#Setting-up-SSL) + - [Setting up Redirects](#Setting-up-Redirects) + - [Nginx](#Nginx) +2. [Configuring ArchivesSpace](#Step-2%3A-Configure-ArchivesSpace) + +## Step 1: Configure Web Server (Apache or Nginx) + +### Apache + +Information about configuring Apache for SSL can be found at http://httpd.apache.org/docs/current/ssl/ssl_howto.html. You should read +that documentation before attempting to configure SSL. + +#### Setting up SSL + +Use the `NameVirtualHost` and `VirtualHost` directives to proxy +requests to the actual application urls. This requires the use of the `mod_proxy` module in Apache. 
+ +```apache +NameVirtualHost *:443 + +<VirtualHost *:443> + ServerName staff.myarchive.org + SSLEngine On + SSLCertificateFile "/path/to/your/cert.crt" + SSLCertificateKeyFile "/path/to/your/key.key" + RequestHeader set X-Forwarded-Proto "https" + ProxyPreserveHost On + ProxyPass / http://localhost:8080/ + ProxyPassReverse / http://localhost:8080/ +</VirtualHost> + +<VirtualHost *:443> + ServerName public.myarchive.org + SSLEngine On + SSLCertificateFile "/path/to/your/cert.crt" + SSLCertificateKeyFile "/path/to/your/key.key" + RequestHeader set X-Forwarded-Proto "https" + ProxyPreserveHost On + ProxyPass / http://localhost:8081/ + ProxyPassReverse / http://localhost:8081/ +</VirtualHost> +``` + +You may optionally set the `Set-Cookie: Secure attribute` by adding `Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure`. When a cookie has the Secure attribute, the user agent will include the cookie in an HTTP request only if the request is transmitted over a secure channel. + +Users may encounter a warning in the browser's console stating `Cookie “archivesspace_session” does not have a proper “SameSite” attribute value. Soon, cookies without the “SameSite” attribute or with an invalid value will be treated as “Lax”. This means that the cookie will no longer be sent in third-party contexts` (example from Firefox 104) or something similar. Some browsers (for example, Chrome version 80 or above) already enforce this. Standard ArchivesSpace installations should be unaffected, but if you encounter problems with integrations and/or customizations of your particular installation, the following directive may be required: `Header edit Set-Cookie ^(.*)$ $1;SameSite=None;Secure`. Alternatively, it may be the case that `SameSite=Lax` (the default) or even `SameSite=Strict` are more appropriate depending on your functional and/or security requirements. 
Please refer to https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie/SameSite or other resources for more information.
+
+#### Setting up Redirects
+
+When running a site over HTTPS, it's a good idea to set up a redirect to ensure any outdated HTTP requests are routed to the correct URL. This can be done through Apache as follows:
+
+```apache
+<VirtualHost *:80>
+  ServerName staff.myarchive.org
+  RewriteEngine On
+  RewriteCond %{HTTPS} off
+  RewriteRule (.*) https://staff.myarchive.org$1 [R,L]
+</VirtualHost>
+
+<VirtualHost *:80>
+  ServerName public.myarchive.org
+  RewriteEngine On
+  RewriteCond %{HTTPS} off
+  RewriteRule (.*) https://public.myarchive.org$1 [R,L]
+</VirtualHost>
+```
+
+### Nginx
+
+Information about configuring nginx for SSL can be found at
+http://nginx.org/en/docs/http/configuring_https_servers.html. You should read
+that documentation before attempting to configure SSL.
+
+```nginx
+server {
+    listen 80;
+    listen [::]:80;
+    server_name staff.myarchive.org;
+    return 301 https://staff.myarchive.org$request_uri;
+}
+
+server {
+    listen 443 ssl;
+    server_name staff.myarchive.org;
+    charset utf-8;
+
+    ssl_certificate /path/to/your/fullchain.pem;
+    ssl_certificate_key /path/to/your/key.pem;
+
+    location / {
+        proxy_pass http://localhost:8080;
+    }
+}
+
+server {
+    listen 80;
+    listen [::]:80;
+    server_name public.myarchive.org;
+    return 301 https://public.myarchive.org$request_uri;
+}
+
+server {
+    listen 443 ssl;
+    server_name public.myarchive.org;
+    charset utf-8;
+
+    ssl_certificate /path/to/your/fullchain.pem;
+    ssl_certificate_key /path/to/your/key.pem;
+
+    location / {
+        proxy_pass http://localhost:8081;
+    }
+}
+```
+
+## Step 2: Configure ArchivesSpace
+
+The following lines need to be altered in the config.rb file:
+
+```ruby
+AppConfig[:frontend_proxy_url] = "https://staff.myarchive.org"
+AppConfig[:public_proxy_url] = "https://public.myarchive.org"
+```
+
+These lines don't need to
be altered and should remain with their default values. E.g.:
+
+```ruby
+AppConfig[:frontend_url] = "http://localhost:8080"
+AppConfig[:public_url] = "http://localhost:8081"
+AppConfig[:frontend_proxy_prefix] = proc { "#{URI(AppConfig[:frontend_proxy_url]).path}/".gsub(%r{/+$}, "/") }
+AppConfig[:public_proxy_prefix] = proc { "#{URI(AppConfig[:public_proxy_url]).path}/".gsub(%r{/+$}, "/") }
+```
diff --git a/src/content/docs/es/provisioning/index.md b/src/content/docs/es/provisioning/index.md
new file mode 100644
index 0000000..95ea9e7
--- /dev/null
+++ b/src/content/docs/es/provisioning/index.md
@@ -0,0 +1,15 @@
+---
+title: Provisioning and server configuration
+description: The index to the provisioning section of the ArchivesSpace technical documentation.
+---
+
+- [Running ArchivesSpace with load balancing and multiple tenants](./clustering.html)
+- [Serving ArchivesSpace over subdomains](./domains.html)
+- [Serving ArchivesSpace user-facing applications over HTTPS](./https.html)
+- [JMeter Test Group Template](./jmeter.html)
+- [Running ArchivesSpace against MySQL](./mysql.html)
+- [Application monitoring with New Relic](./newrelic.html)
+- [Running ArchivesSpace under a prefix](./prefix.html)
+- [robots.txt](./robots.html)
+- [Running ArchivesSpace with external Solr](./solr.html)
+- [Tuning ArchivesSpace](./tuning.html)
diff --git a/src/content/docs/es/provisioning/jmeter.md b/src/content/docs/es/provisioning/jmeter.md
new file mode 100644
index 0000000..0373a4d
--- /dev/null
+++ b/src/content/docs/es/provisioning/jmeter.md
@@ -0,0 +1,13 @@
+---
+title: JMeter Test Group Template
+description: How to create a JMeter Test Group.
+---
+
+## Creating a test group
+
+Load the file 'example_test_plan.jmx' into JMeter and make sure the following are true for the example to run successfully:
+
+- The backend is running on localhost port 4567
+- There is at least one repository, and its URL is /repositories/2
+
+The example will log in to the backend, store the session key as a JMeter variable, and make two basic requests, one of which will require a session key.
diff --git a/src/content/docs/es/provisioning/mysql.md b/src/content/docs/es/provisioning/mysql.md
new file mode 100644
index 0000000..8ba110a
--- /dev/null
+++ b/src/content/docs/es/provisioning/mysql.md
@@ -0,0 +1,89 @@
+---
+title: Using MySQL
+description: Instructions for how to set up MySQL with ArchivesSpace.
+---
+
+Out of the box, the ArchivesSpace distribution runs against an
+embedded database, but this is only suitable for demonstration
+purposes. When you are ready to start using ArchivesSpace with
+real users and data, you should switch to using MySQL. MySQL offers
+significantly better performance when multiple people are using the
+system, and will ensure that your data is kept safe.
+
+ArchivesSpace is currently able to run on MySQL versions 5.x and 8.x.
+
+## Download MySQL Connector
+
+ArchivesSpace requires the
+[MySQL Connector for Java](http://dev.mysql.com/downloads/connector/j/),
+which must be downloaded separately because of its licensing agreement.
+Download the Connector and place it in a location where ArchivesSpace can
+find it on its classpath:
+
+```shell
+$ cd lib
+$ curl -Oq https://repo1.maven.org/maven2/com/mysql/mysql-connector-j/9.1.0/mysql-connector-j-9.1.0.jar
+```
+
+Note that the version of the MySQL connector may be different by the
+time you read this.
+
+## Set up your MySQL database
+
+Next, create an empty database in MySQL and grant access to a dedicated
+ArchivesSpace user. The following example uses username `as`
+and password `as123`.
+ +**NOTE: WHEN CREATING THE DATABASE, YOU MUST SET THE DEFAULT CHARACTER +ENCODING FOR THE DATABASE TO BE `utf8`.** This is particularly important +if you use a MySQL client to create the database (e.g. Navicat, MySQL +Workbench, phpMyAdmin, etc.). + +<!-- This is also true of MySQL 8 in general... --> + +**NOTE: If using AWS RDS MySQL databases, binary logging is not enabled by default and updates will fail.** To enable binary logging, you must create a custom db parameter group for the database and set the `log_bin_trust_function_creators = 1`. See [Working with DB Parameter Groups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html) for information about RDS parameter groups. Within a MySQL session you can also try `SET GLOBAL log_bin_trust_function_creators = 1;` + +```shell +$ mysql -uroot -p + +mysql> create database archivesspace default character set utf8mb4; +Query OK, 1 row affected (0.08 sec) +``` + +If using MySQL 5.7 and below: + +```sql +mysql> grant all on archivesspace.* to 'as'@'localhost' identified by 'as123'; +Query OK, 0 rows affected (0.21 sec) +``` + +If using MySQL 8+: + +```sql +mysql> create user 'as'@'localhost' identified by 'as123'; +Query OK, 0 rows affected (0.08 sec) + +mysql> grant all privileges on archivesspace.* to 'as'@'localhost'; +Query OK, 0 rows affected (0.21 sec) +``` + +Then, modify your `config/config.rb` file to refer to your MySQL +database. When you modify your configuration file, **MAKE SURE THAT YOU +SPECIFY THAT THE CHARACTER ENCODING FOR THE DATABASE TO BE `UTF-8`** as shown +below: + +```ruby +AppConfig[:db_url] = "jdbc:mysql://localhost:3306/archivesspace?user=as&password=as123&useUnicode=true&characterEncoding=UTF-8" +``` + +There is a database setup script that will create all the tables that +ArchivesSpace requires. 
Run this with:
+
+```shell
+scripts/setup-database.sh # or setup-database.bat under Windows
+```
+
+You can now follow the instructions in the "Getting Started" section to start
+your ArchivesSpace application.
+
+**NOTE for MySQL 8:** MySQL 8 uses a new default authentication plugin (caching_sha2_password) instead of the mysql_native_password plugin that MySQL 5.7 and older used. This may require starting a MySQL 8 server with the `--default-authentication-plugin=mysql_native_password` option. You may also be able to change the auth mechanism on a per-user basis by logging into mysql and running `ALTER USER 'as'@'localhost' IDENTIFIED WITH mysql_native_password BY 'as123';`. Also be sure to have the latest [MySQL Connector for Java](http://dev.mysql.com/downloads/connector/j/) from MySQL in your /lib/ directory for ArchivesSpace.
diff --git a/src/content/docs/es/provisioning/newrelic.md b/src/content/docs/es/provisioning/newrelic.md
new file mode 100644
index 0000000..49ff283
--- /dev/null
+++ b/src/content/docs/es/provisioning/newrelic.md
@@ -0,0 +1,40 @@
+---
+title: Application monitoring with New Relic
+description: Instructions for how to set up New Relic for application monitoring on ArchivesSpace.
+---
+
+[New Relic](http://newrelic.com/) is an application performance monitoring tool (amongst other things).
+
+**To use with ArchivesSpace you must:**
+
+- Sign up for an account at New Relic (there is a free tier and paid plans)
+- Edit config.rb to:
+  - activate the `newrelic` plugin
+  - add the New Relic license key
+  - add an application name to identify the ArchivesSpace instance in the New Relic dashboard
+
+For example, in config.rb:
+
+```ruby
+## You may have other plugins
+AppConfig[:plugins] = ['local', 'newrelic']
+
+AppConfig[:newrelic_key] = "enteryourkeyhere"
+AppConfig[:newrelic_app_name] = "ArchivesSpace"
+```
+
+- Install the New Relic agent library by initializing the plugin:
+
+```shell
+## For Linux/OSX
+$ scripts/initialize-plugin.sh newrelic
+
+## For Windows
+% scripts\initialize-plugin.bat newrelic
+```
+
+- Start, or restart, ArchivesSpace to pick up the configuration.
+
+Within a few minutes the application should be visible in the New Relic dashboard with data being collected.
diff --git a/src/content/docs/es/provisioning/prefix.md b/src/content/docs/es/provisioning/prefix.md
new file mode 100644
index 0000000..d0ddc38
--- /dev/null
+++ b/src/content/docs/es/provisioning/prefix.md
@@ -0,0 +1,64 @@
+---
+title: Proxy prefix
+description: Instructions for serving each user-facing ArchivesSpace application under a prefix rather than as its own subdomain.
+---
+
+**Important Note: Prefixes do NOT work properly in versions between 2.0.1 and 2.2.2**
+
+This document describes a simple approach for those wishing to deviate from the recommended
+practice of running each user-facing ArchivesSpace application on its own subdomain, and instead
+serve each application under a prefix, e.g.
+
+```
+http://aspace.myarchive.org/staff
+http://aspace.myarchive.org/public
+```
+
+The configuration described in this document is one possible approach,
+and to keep things simple the following are assumed:
+
+- ArchivesSpace is running on a single Linux server
+- The server is running the Apache 2.2+ webserver
+
+Unless otherwise stated, it is assumed that you have root access on
+your machines, and all commands are to be run as root (or with sudo).
+
+## Step 1: Set up proxies in your Apache configuration
+
+The following edits can be made in the httpd.conf file itself, or in an included file:
+
+```apache
+ProxyPass /staff http://localhost:8080/staff
+ProxyPassReverse /staff http://localhost:8080/
+ProxyPass /public http://localhost:8081/public
+ProxyPassReverse /public http://localhost:8081/
+```
+
+Now restart Apache.
+
+## Step 2: Install and configure ArchivesSpace
+
+Follow the instructions in the main README to download and install ArchivesSpace.
+
+Open the file `archivesspace/config/config.rb` and add the following lines:
+
+```ruby
+AppConfig[:frontend_proxy_url] = 'http://aspace.myarchive.org/staff'
+AppConfig[:public_proxy_url] = 'http://aspace.myarchive.org/public'
+```
+
+(Note: These lines should NOT begin with a '#' character.)
+
+Start ArchivesSpace.
+
+## Step 3: (Optional) Lock down ports 8080 and 8081
+
+By default, the staff and public applications are accessible on ports 8080 and 8081:
+
+```
+http://aspace.myarchive.org:8080
+http://aspace.myarchive.org:8081
+```
+
+Since these are not the URLs at which users should access the application, you will probably
+want to close them off. See README_HTTPS for more information on closing ports using iptables.
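+
+As a sketch, the same iptables rules shown in the subdomain setup guide apply
+here as well (adapt them to your firewall tooling):
+
+```shell
+iptables -A INPUT -p tcp -s localhost --dport 8080 -j ACCEPT
+iptables -A INPUT -p tcp --dport 8080 -j DROP
+iptables -A INPUT -p tcp -s localhost --dport 8081 -j ACCEPT
+iptables -A INPUT -p tcp --dport 8081 -j DROP
+```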
diff --git a/src/content/docs/es/provisioning/robots.md b/src/content/docs/es/provisioning/robots.md
new file mode 100644
index 0000000..702522a
--- /dev/null
+++ b/src/content/docs/es/provisioning/robots.md
@@ -0,0 +1,45 @@
+---
+title: robots.txt
+description: Instructions for adding a robots.txt to your ArchivesSpace site.
+---
+
+The easiest way to add a `robots.txt` to your site is simply to create
+one in your `/config/` directory. This file will be served as a standard
+`robots.txt` file when you start your site.
+
+If you're not able to do that, you can use a separate file and your proxy.
+
+For Apache the config would look like this:
+
+```apache
+<Location "/robots.txt">
+  SetHandler None
+  Require all granted
+</Location>
+Alias /robots.txt /var/www/robots.txt
+```
+
+For nginx, it would look more like this:
+
+```nginx
+location /robots.txt {
+  alias /var/www/robots.txt;
+}
+```
+
+You may also add robots meta-tags to your `layout_head.html.erb` to be included in the header area of your site.
+
+For example:
+
+`<meta name="robots" content="noindex, nofollow">`
+
+A sensible starting point for a `robots.txt` file looks something like this (note the `User-agent` line, which the rules need in order to apply):
+
+```
+User-agent: *
+Disallow: /search*
+Disallow: /inventory/*
+Disallow: /collection_organization/*
+Disallow: /repositories/*/top_containers/*
+Disallow: /check_session*
+Disallow: /repositories/*/resources/*/tree/*
+```
diff --git a/src/content/docs/es/provisioning/solr.md b/src/content/docs/es/provisioning/solr.md
new file mode 100644
index 0000000..84845d0
--- /dev/null
+++ b/src/content/docs/es/provisioning/solr.md
@@ -0,0 +1,205 @@
+---
+title: External Solr
+description: Instructions for installing and using external Solr with ArchivesSpace.
+---
+
+:::note
+For ArchivesSpace > 3.1.1, external Solr is **required**. For previous versions it is optional.
+:::
+
+## Supported Solr Versions
+
+See the [Solr requirement notes](/administration/getting_started#solr)
+
+## Install Solr
+
+Refer to the [Solr documentation](https://solr.apache.org/guide/solr/latest/) for instructions on setting up Solr on your server.
+
+You will download the Solr package and extract it to a folder of your choosing. Do not start Solr
+until you have added the ArchivesSpace configuration files.
+
+**We strongly recommend a standalone mode installation. No support will be provided for Solr
+Cloud deployments specifically (i.e. we cannot help troubleshoot Zookeeper).**
+
+## Create a configset
+
+Before running Solr you will need to
+set up a [configset](https://solr.apache.org/guide/8_10/config-sets.html#configsets-in-standalone-mode).
+
+### Create a new directory
+
+#### Linux
+
+Using the command line:
+
+```shell
+mkdir -p /$path/$to/$solr/server/solr/configsets/archivesspace/conf
+```
+
+Be sure to replace `/$path/$to/$solr` with your actual Solr location, which might be something like:
+
+```shell
+mkdir -p /opt/solr/server/solr/configsets/archivesspace/conf
+```
+
+#### Windows
+
+Right-click on your Solr directory and open in Windows Terminal (PowerShell).
+
+```
+mkdir -p .\server\solr\configsets\archivesspace\conf
+```
+
+You should see something like this in response:
+
+```
+Directory: C:\Users\archivesspace\Projects\solr-8.10.1\server\solr\configsets\archivesspace
+Mode LastWriteTime Length Name
+---- ------------- ------ ----
+d----- 10/25/2021 12:15 PM conf
+```
+
+### Copy the config files
+
+Copy the ArchivesSpace Solr configuration files from the `solr` directory included
+in the zip file release into the `$SOLR_HOME/server/solr/configsets/archivesspace/conf` directory.
+
+There should be four files:
+
+- schema.xml
+- solrconfig.xml
+- stopwords.txt
+- synonyms.txt
+
+```shell
+ls .\server\solr\configsets\archivesspace\conf\
+
+Directory: C:\Users\archivesspace\Projects\solr-8.10.1\server\solr\configsets\archivesspace\conf
+
+Mode LastWriteTime Length Name
+---- ------------- ------ ----
+-a---- 10/25/2021 12:18 PM 18291 schema.xml
+-a---- 10/25/2021 12:18 PM 3046 solrconfig.xml
+-a---- 10/25/2021 12:18 PM 0 stopwords.txt
+-a---- 10/25/2021 12:18 PM 0 synonyms.txt
+```
+
+_Note: your exact output may be slightly different._
+
+## Set up the environment
+
+When using Solr v9 or later, the use of [Solr modules](https://solr.apache.org/guide/solr/latest/configuration-guide/solr-modules.html) is required.
+We recommend using the environment variable option to specify the modules to use:
+
+```shell
+SOLR_MODULES=analysis-extras
+```
+
+This environment variable needs to be available to the Solr instance at runtime.
+
+For instructions on how to set an environment variable, here are some recommended articles:
+
+- When using [linux](https://www.freecodecamp.org/news/how-to-set-an-environment-variable-in-linux)
+- When using a [mac](https://phoenixnap.com/kb/set-environment-variable-mac)
+- When using [windows](https://docs.oracle.com/cd/E83411_01/OREAD/creating-and-modifying-environment-variables-on-windows.htm#OREAD158). Note that on windows, the variable name should be: `SOLR_MODULES` and the variable value: `analysis-extras`
+
+## Set up a Solr core
+
+With the `configset` in place, run the command to create an ArchivesSpace core:
+
+```bash
+bin/solr start
+```
+
+Wait for Solr to start (running as a non-admin user):
+
+```shell
+.\bin\solr start
+"java version info is 11.0.12"
+"Extracted major version is 11"
+OpenJDK 64-Bit Server VM warning: JVM cannot use large page memory because it does not have enough privilege to lock pages in memory.
+Waiting up to 30 to see Solr running on port 8983
+Started Solr server on port 8983.
Happy searching!
+```
+
+You can check that Solr is running at [http://localhost:8983](http://localhost:8983).
+
+Now create the core:
+
+```shell
+bin/solr create -c archivesspace -d archivesspace
+```
+
+You should see confirmation:
+
+```shell
+"java version info is 11.0.12"
+"Extracted major version is 11"
+
+Created new core 'archivesspace'
+```
+
+In the browser you should be able to access the [ArchivesSpace schema](http://localhost:8983/solr/#/archivesspace/files?file=schema.xml).
+
+## Disable the embedded server Solr instance (optional <= 3.1.1 only)
+
+Edit the ArchivesSpace config.rb file:
+
+```ruby
+AppConfig[:enable_solr] = false
+```
+
+Note that doing this means that you will have to back up Solr manually.
+
+## Set the Solr URL in your config.rb file
+
+This config setting should point to your Solr instance:
+
+```ruby
+AppConfig[:solr_url] = "http://localhost:8983/solr/archivesspace"
+```
+
+If you are not running ArchivesSpace and Solr on the same server, update
+`localhost` to your Solr address.
+
+By default, on startup, ArchivesSpace will check that the Solr configuration
+appears to be correct and will raise an error if not. You can disable this check
+by setting `AppConfig[:solr_verify_checksums] = false` in `config.rb`.
+
+Please note: if you're upgrading an existing installation of ArchivesSpace to use an external Solr, you will need to trigger a full re-index.
+See [Indexes](/administration/indexes) for more details.
+
+---
+
+You can now follow the instructions in the [Getting started](/administration/getting_started) section to start
+your ArchivesSpace application.
+
+---
+
+## Upgrading Solr
+
+If you are using an older version of Solr than is recommended, you may need to upgrade (if called out
+in the release notes) or simply want to.
Before performing an upgrade, it is recommended that you review:
+
+- [Solr upgrade notes](https://solr.apache.org/guide/solr/latest/upgrade-notes/solr-upgrade-notes.html)
+- [ArchivesSpace's release notes](https://github.com/archivesspace/archivesspace/releases)
+
+You should also review this document, as the installation steps may include
+instructions that were not present in the past. For example, from Solr v9 there is a
+requirement to use Solr modules, with instructions above to configure the modules using environment
+variables.
+
+The crucial part is ensuring that ArchivesSpace's schema is being used for the
+ArchivesSpace Solr index. The config setting `AppConfig[:solr_verify_checksums] = true`
+performs a startup check to confirm this is the case; if the check fails, ArchivesSpace
+will not start.
+
+From ArchivesSpace 3.5+, `AppConfig[:solr_verify_checksums]` does not check the
+`solrconfig.xml` file, so you can make changes to it without ArchivesSpace failing
+on startup. However, for an upgrade you will want to at least compare the ArchivesSpace
+`solrconfig.xml` to the one in use, in case changes are needed to
+work with the new Solr version. For example, the ArchivesSpace Solr v8 `solrconfig.xml`
+will not work as is with Solr v9.
+
+After upgrading Solr you should trigger a full re-index. Instructions for this are in
+[Indexes](/administration/indexes).
diff --git a/src/content/docs/es/provisioning/tuning.md b/src/content/docs/es/provisioning/tuning.md
new file mode 100644
index 0000000..b36f9f2
--- /dev/null
+++ b/src/content/docs/es/provisioning/tuning.md
@@ -0,0 +1,51 @@
+---
+title: Performance tuning
+description: Guidance for performance tuning of the ArchivesSpace stack.
+---
+
+ArchivesSpace is a stack of web applications that may require special tuning in order to run most effectively.
This is especially the case for institutions with lots of data or many simultaneous users editing metadata.
+Keep in mind that ArchivesSpace can be hosted on multiple servers, either in a [clustered setup](/provisioning/clustering) or by deploying the various applications (i.e. backend, frontend, public, solr, and indexer) on separate servers.
+
+## Application Settings
+
+The application itself can be tuned in numerous ways. It’s a good idea to read the [configuration documentation](/customization/configuration), as there are many settings that can be adjusted to fit your needs.
+
+An important thing to note is that since ArchivesSpace is a Java application, it’s possible to set the memory allocations used by the JVM. There are numerous articles on the internet about optimal settings, which will depend greatly on the load your server is experiencing and the hardware. It’s a good idea to monitor the application and ensure that it’s not hitting the upper limit of the heap size you’ve set.
+
+These settings are:
+
+- `ASPACE_JAVA_XMX`: maximum heap space (maps to Java’s `-Xmx`, default `-Xmx1024m`)
+- `ASPACE_JAVA_XSS`: thread stack size (maps to `-Xss`, default `-Xss2m`)
+- `ASPACE_GC_OPTS`: options used by the Java garbage collector (default: `-XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:NewRatio=1`)
+
+To modify these settings, Linux users can either export an environment variable (e.g. `$ export ASPACE_JAVA_XMX="-Xmx2048m"`) or edit the archivesspace.sh startup script and modify the defaults.
+
+Windows users must edit the archivesspace.bat file.
+
+If you're having trouble with errors like `java.lang.OutOfMemoryError`, try doubling the `ASPACE_JAVA_XMX`.
On Linux you can do this either by setting an environment variable like `$ export ASPACE_JAVA_XMX="-Xmx2048m"` or by editing archivesspace.sh:
+
+```shell
+if [ "$ASPACE_JAVA_XMX" = "" ]; then
+  ASPACE_JAVA_XMX="-Xmx2048m"
+fi
+```
+
+For Windows, you'll change archivesspace.bat:
+
+```shell
+java -Darchivesspace-daemon=yes %JAVA_OPTS% -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:NewRatio=1 -Xss2m -Xmx2048m -Dfile.encoding=UTF-8 -cp "%GEM_HOME%\gems\jruby-rack-1.1.12\lib\*;lib\*;launcher\lib\*!JRUBY!" org.jruby.Main "launcher/launcher.rb" > "logs/archivesspace.out" 2>&1
+```
+
+**Note: the application will not use the available memory unless you set the maximum heap size to allocate it.** For example, if your server has 4 GB of RAM but you haven’t adjusted the ArchivesSpace settings, you’ll only be using 1 GB.
+
+## MySQL
+
+The ArchivesSpace application can hit a database server rather hard, since it’s a metadata-rich application. There are many articles online about how to tune a MySQL database. A good place to start is a tool like [MySQL Tuner](http://mysqltuner.com/) or [Tuning Primer](https://rtcamp.com/tutorials/mysql/tuning-primer/), which can give good hints on possible tweaks to make to your MySQL server configuration.
+
+Keep a close eye on the memory available to the server, as well as your InnoDB buffer pool.
+
+## Solr
+
+The internet is full of suggestions on how to optimize a Solr index. [Running an external Solr index](/provisioning/solr) can be beneficial to the performance of ArchivesSpace, since that moves the index to its own server.
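As a rough starting point for the InnoDB buffer-pool sizing mentioned under MySQL above, a common community rule of thumb (an assumption, not an official ArchivesSpace recommendation) is to give InnoDB a large fraction of RAM on a dedicated database host:

```ruby
# Rule-of-thumb sketch: on a dedicated MySQL server, allocate roughly
# 70% of system RAM to the InnoDB buffer pool. The 0.7 fraction is an
# assumption — validate against MySQL Tuner / Tuning Primer output.
def innodb_buffer_pool_mb(total_ram_mb, fraction: 0.7)
  (total_ram_mb * fraction).to_i
end

innodb_buffer_pool_mb(4096) # => 2867 (for a 4 GB server)
```

The result maps onto MySQL's `innodb_buffer_pool_size` setting; treat it only as a first guess to refine with the tuning tools above.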
diff --git a/src/content/docs/es/release-notes/v4.0.0.md b/src/content/docs/es/release-notes/v4.0.0.md new file mode 100644 index 0000000..3324b7b --- /dev/null +++ b/src/content/docs/es/release-notes/v4.0.0.md @@ -0,0 +1,89 @@ +--- +title: v4.0.0 +--- + +## ArchivesSpace v4.0.0 Release Summary + +Major technical infrastructure upgrades and user interface improvements characterize this release. Key changes include: + +## Breaking Changes + +- **Breaking change**: [OAI identifiers now use colon separator between the namespace and identifier](#api-and-integration-updates) +- **Breaking change**: [Solr 9 now required](#major-infrastructure-updates) +- **Breaking change**: [the Sequence module has been removed from core ArchivesSpace](#plugins-and-configuration) + +## Major Infrastructure Updates + +- **Breaking change**: Solr 9 now required +- Upgraded to newer versions of: + - Bootstrap (4.3) + - jQuery (3.7.0) + - Rails (6.1.6) + - JRuby (9.3.x.x) + - Nokogiri (1.13.10) + - Sequel (5.9.0) +- Frontend and public development web server migrated from Jetty to Puma (6.4.2) +- Staff application CSS migrated from Less to Sass +- Java 8 no longer supported - requires Java 11 or 17 +- Docker now supported as recommended deployment method + +## Public User Interface Improvements + +- Collection organization sidebar can now be configured for left/right positioning in config.rb +- New information and options for large finding aids + - Displays percentage of loaded records in infinite scroll + - Option to load all children for a resource at once (vs infinite scroll) +- Search terms now highlighted in results +- Fixed bug causing extra lines in notes display +- Change PDF label from "Print" to "Download PDF" +- PDF uses Kurinto fonts by default +- Improved hyperlink display in classification descriptions + +## Staff Interface Enhancements + +- Bulk updater plugin now part of core application +- New ability to duplicate full resource or archival object records +- Enhanced 
spreadsheet importers
+  - Added new fields for digital objects to bulk Digital Object spreadsheet
+  - Location imports can include an owner repository
+  - Archival Object CSV imports now respect publication status
+  - New option to download partially completed digital object spreadsheet template
+- Fixed agent merge preview page
+- Improved staff plugins dropdown in repository settings
+- Fixes to the Rapid Data Entry modal
+- Fixed tooltip bugs
+- Improved Jobs status layouts
+
+## EAD Export Changes
+
+- More fields have special characters escaped
+- Removed commas and periods from langmaterial notes
+- Leading XML tags in Revision Description will no longer cause invalid XML
+
+## Documentation and Testing
+
+- Launched new technical documentation site at docs.archivesspace.org
+- Ported all Selenium tests to Capybara
+- Added functionality for test failure screenshots
+
+## API and Integration Updates
+
+- **Breaking change**: OAI identifiers now use colon separator between the namespace and identifier
+
+## Security and Administration
+
+- New config.rb option to allow users with the Administrator role to access the system information page
+- Added config.rb option for favicon display
+- PUI PDFs will now include clearer error messages when generation fails
+- Enhanced bulk import/update capabilities with new configuration options
+
+## Plugins and Configuration
+
+- **Breaking change**: the Sequence module has been removed from core ArchivesSpace
+
+## Community Contributions
+
+- 76 community contributions accepted
+- 134 Pull Requests merged
+- 146 Jira Tickets closed
+- Contributions from multiple community members and organizations
diff --git a/src/content/docs/fr/404.md b/src/content/docs/fr/404.md
new file mode 100644
index 0000000..976d1cc
--- /dev/null
+++ b/src/content/docs/fr/404.md
@@ -0,0 +1,9 @@
+---
+title: '404'
+editUrl: false
+lastUpdated: false
+tableOfContents: false
+hero:
+  title: '404'
+  tagline: Page not found.
Check the URL or try searching for what you were looking for. +--- diff --git a/src/content/docs/fr/about/authoring.md b/src/content/docs/fr/about/authoring.md new file mode 100644 index 0000000..3b2b1c8 --- /dev/null +++ b/src/content/docs/fr/about/authoring.md @@ -0,0 +1,308 @@ +--- +title: Authoring content +description: This page outlines best practices for updating and writing markdown files for the tech-docs repository. +--- + +The Tech Docs site contains two types of content--documentation pages and blog posts. Both content types are written in [Markdown](https://en.wikipedia.org/wiki/Markdown) and define page-specific details as [yaml](https://yaml.org/) key:value pairs. + +Tech Docs uses [GitHub-flavored Markdown](https://github.github.com/gfm/), a variant of Markdown syntax, and [SmartyPants](https://daringfireball.net/projects/smartypants/), a typographic punctuation plugin. These tools provide authors niceties like generating clickable links from text, creating lists and tables, formatting for quotations and em-dashes, and more. + +## Where pages go + +### Documentation pages + +Documentation pages live under `src/content/docs/`. Each page is a `.md` or `.mdx` file. The URL path is `/` plus the file path relative to that directory, without the extension—for example, `src/content/docs/architecture/public.md` is served at `/architecture/public`. Nested folders add segments to the path. + +### Blog + +Blog posts live under `src/content/blog/` as `.md` or `.mdx` files. The URL is `/blog/` plus the path to the file relative to that folder, without the extension—for example, `src/content/blog/v4-2-0-release-candidate.md` is served at `/blog/v4-2-0-release-candidate`. Nested folders add path segments to the URL. + +Valid frontmatter and body content are required for the site to be built and published. 
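The file-path-to-URL rule above can be sketched as a small function (an illustrative sketch only, not code from this repository — Astro's actual routing also handles index files and custom slugs):

```javascript
// Derive a docs URL from a file path under src/content/docs/
// by stripping the collection prefix and the .md/.mdx extension
function docsUrl(filePath) {
  return '/' + filePath.replace(/^src\/content\/docs\//, '').replace(/\.mdx?$/, '')
}

console.log(docsUrl('src/content/docs/architecture/public.md')) // prints "/architecture/public"
```

The same shape applies to blog posts, with `src/content/blog/` mapping to a `/blog/` prefix.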
+ +## Markdown + +Common use of Markdown throughout Tech Docs includes: + +- [headings](#headings) +- [links](#links) +- [emphasizing text](#emphasizing-text) +- [paragraphs](#paragraphs) +- [lists](#lists) +- [code examples](#code-examples) +- [diagrams](#diagrams) +- [asides](#asides) +- [images](#images) + +### Headings + +Start a new line with between 2 and 6 `#` symbols, followed by a single space, and then the heading text. + +```md +## Example second-level heading +``` + +The number of `#` symbols corresponds to the heading level in the document hierarchy. **The first heading level is reserved for the page title** (available in the page [YAML frontmatter](#yaml-frontmatter)). Therefore the first _authored_ heading on every page should be a second level heading (`##`). + +:::note[Second level heading requirement] +Authored headings should start at the second level (`##`) on every page, since the first level (`#`) is reserved for the page title which is machine-generated. +::: + +```md +<!-- example.md --> + +## Second level heading + +Notice the page starts with a second level heading. + +Notice the blank lines above and below each heading. + +### Third level heading + +This is demo text under the Third level heading section. + +#### Fourth level heading + +##### Fifth level heading + +###### Sixth and final level heading +``` + +### Links + +Create a link by wrapping the link text in brackets (`[ ]`) immediately followed by the external link URL, or internal link path, wrapped in parentheses (`( )`). + +```md +[text](URL or path) +``` + +Be sure not to include any space between the wrapped text and URL. + +```md +<!-- example.md --> + +See the [TechDocs source code](https://github.com/archivesspace/tech-docs). 
+``` + +#### In documentation pages + +##### To other pages + +When linking to another Tech Docs documentation page, start with a forward slash (`/`), followed by the location of the page as found in the `src/content/docs/` directory, and omit the file extension (`.md`). + +```md +✅ [Public user interface](/architecture/public) + +❌ [Public user interface](architecture/public) +❌ [Public user interface](./architecture/public) +❌ [Public user interface](../architecture/public) +❌ [Public user interface](/architecture/public.md) +``` + +:::note[Internal link requirements] +Links to other Tech Docs documentation pages should: + +1. start with a forward slash (`/`) +2. reflect the location of the page as found in `src/content/docs/` +3. not include the file extension (`.md`) + +::: + +##### Within a page + +Starlight provides [automatic heading anchor links](https://starlight.astro.build/guides/authoring-content/#automatic-heading-anchor-links). To link to a section within a page, use the `#` symbol followed by the HTML `id` of the relevant section heading. + +```md +<!-- src/content/docs/about/authoring.md --> + +See the [Links](#links) section on this page. + +See the [Public configuration options](/architecture/public#configuration). +``` + +:::tip +A section heading's `id` is usually the same text string as the heading itself, but in all lowercase letters and with all single spaces converted to single hyphens. See the actual HTML `id` by right clicking on the heading to "inspect" it. +::: + +#### In blog posts + +When you write the body of a blog post, links to documentation pages use the same pattern as [in documentation pages](#to-other-pages): a leading `/` and the path under `src/content/docs/` without `.md`, for example `[Public user interface](/architecture/public)`. 
+
+Links to another blog post use `/blog/` plus that post’s path under `src/content/blog/` without the extension—the same shape as its public URL (see [Blog](#blog) under [Where pages go](#where-pages-go)). For example, `src/content/blog/v4-2-0-release-candidate.md` is linked as `[v4.2.0 release candidate](/blog/v4-2-0-release-candidate)`. Nested folders add segments, for example `/blog/releases/v4-2-0` for `src/content/blog/releases/v4-2-0.md`.
+
+### Emphasizing text
+
+Wrap text to be emphasized with `_` for italics, `**` for bold, and `~~` for strikethrough.
+
+```md
+<!-- example.md -->
+
+_Italicized_ text
+
+**Bold** text
+
+**_Bold and italicized_** text
+
+~~Strikethrough~~ text
+```
+
+### Paragraphs
+
+Create paragraphs by leaving a blank line between lines of text.
+
+```md
+<!-- example.md -->
+
+This is one paragraph.
+
+This is another paragraph.
+```
+
+### Lists
+
+Precede each line in a list with a dash (`-`) for a bulleted list, or a number followed by a period (`1.`) for an ordered list.
+
+```md
+<!-- example.md -->
+
+- Resource
+- Digital Object
+- Accession
+
+1. Accession
+2. Digital Object
+3. Resource
+```
+
+### Code examples
+
+Wrap inline code with a single backtick (`` ` ``).
+
+Wrap code blocks with triple backticks (` ``` `), also known as a "code fence", placed just above and below the code. Append the name of the code's language or its file extension to the first set of backticks for syntax highlighting.
+
+````md
+<!-- example.md -->
+
+The `JSONModel` class is central to ArchivesSpace.
+
+```ruby
+def h(str)
+  ERB::Util.html_escape(str)
+end
+```
+````
+
+### Diagrams
+
+Tech Docs supports [Mermaid](https://mermaid.js.org/) diagrams in both documentation pages and blog posts.
+
+Use a fenced code block with `mermaid` as the language:
+
+````md
+```mermaid
+flowchart TD
+  A[Staff user edits record] --> B[Indexer updates Solr]
+  B --> C[Updated record in PUI]
+```
+````
+
+Rendered example:
+
+```mermaid
+flowchart TD
+  A[Staff user edits record] --> B[Indexer updates Solr]
+  B --> C[Updated record in PUI]
+```
+
+### Asides
+
+Asides are useful for highlighting secondary or marketing information.
+
+Wrap content in a pair of triple colons (`:::`) and append one of the aside types (e.g. `note`) to the first set of colons. The aside types are `note`, `tip`, `caution`, and `danger`, each of which has its own colors and icon. Customize the title by wrapping text in brackets (`[ ]`) placed after the note type.
+
+```md
+<!-- example.md -->
+
+:::tip
+Become an ArchivesSpace member today! 🎉
+:::
+
+:::note[Some custom title]
+
+### Markdown is supported in asides
+
+![Pic alt text](../../../../images/example.jpg)
+
+Lorem ipsum dolor sit amet consectetur, adipisicing elit.
+:::
+```
+
+:::note
+Asides are a custom Markdown feature provided by the underlying [Starlight framework](https://starlight.astro.build/guides/authoring-content/#asides) that builds Tech Docs.
+:::
+
+:::tip[Customize the aside title]
+Customize the aside title by wrapping text in brackets (`[ ]`) after the note type, in this case `tip`.
+:::
+
+### Images
+
+Show an image using an exclamation point (`!`), followed by the image's [alt text](https://en.wikipedia.org/wiki/Alt_attribute) (a brief description of the image) wrapped in brackets (`[ ]`), followed by the external URL, or internal path, wrapped in parentheses (`( )`).
+
+```md
+<!-- example.md -->
+
+![A dozen Krispy Kreme donuts in a box](https://example.com/donuts.jpg)
+
+![The ArchivesSpace logo](../../../../images/logo.svg)
+```
+
+:::note[Put images in `src/images`]
+All internal images belong in the `src/images` directory. The relative path to images from this file is `../../../../images`.
+::: + +## YAML frontmatter + +Each content file starts with [YAML](https://yaml.org/) frontmatter: metadata in a block wrapped in triple dashes (`---`). Use the templates below so every field we rely on is set explicitly. For more on how the site build system reads these values, see [Documentation content collection and schema](/about/development#documentation-content-collection-and-schema) and [Blog content collection and schema](/about/development#blog-content-collection-and-schema) on the Development page. + +### Documentation pages + +```md +--- +title: Using MySQL +description: Instructions for how to set up MySQL with ArchivesSpace. +--- +``` + +- **`title`** — Page title shown in the layout, browser tab, and metadata. +- **`description`** — Short summary used for SEO, search, and social previews. + +### Blog posts + +```md +--- +title: v4.2.0 Release Candidate +metaDescription: Early access to ArchivesSpace v4.2.0-RC1 is now available. +teaser: ArchivesSpace <a href="https://github.com/archivesspace/archivesspace/releases/tag/v4.2.0-RC1">v4.2.0-RC1</a> has landed for early testing. +pubDate: 2026-03-20 +authors: + - Pat Doe +updatedDate: 2026-03-21 +--- +``` + +- **`title`** — Post headline on the post page and on the blog index. +- **`metaDescription`** — Short summary for page metadata (SEO) and for the index card when `teaser` is omitted. +- **`teaser`** — Text or HTML for the blog index card (links and light markup are common here). +- **`pubDate`** — Publication date; posts are ordered by this value, newest first. Use an ISO-style date (`YYYY-MM-DD`). +- **`authors`** — List of author names, shown comma-separated on the index and post page. +- **`updatedDate`** — Last-updated date in the same `YYYY-MM-DD` form when the post is revised after publication. + +## Image files + +All internal image files used in Tech Docs content should go in the `src/images` directory, located at the root of this project. 
+
+## Writing conventions
+
+- Plugins, not plug-ins
+- Titles are sentence-case ("Application monitoring with New Relic")
+- Documentation page titles prefer '-ing' verb forms ("Using MySQL", "Serving over HTTPS")
diff --git a/src/content/docs/fr/about/development.md b/src/content/docs/fr/about/development.md
new file mode 100644
index 0000000..40771f9
--- /dev/null
+++ b/src/content/docs/fr/about/development.md
@@ -0,0 +1,318 @@
+---
+title: Development
+description: This page describes how to set up the tech-docs repository, build the website, update dependencies, and run tests
+# This is the last page in the sidebar, so point to Home next instead of
+# the Help Center which comes after this page in the sidebar
+next:
+  link: /
+  label: Home
+---
+
+Tech Docs is a [Node.js](https://nodejs.org) application, built with [Astro](https://astro.build/) and its [Starlight](https://starlight.astro.build/) documentation site framework. The source code is hosted on [GitHub](https://github.com/archivesspace/tech-docs). The site is statically built and (temporarily) hosted via [Cloudflare Pages](https://pages.cloudflare.com/). Content is written in [Markdown](/about/authoring#markdown). When the source code changes, a new set of static files is generated and published shortly after.
+
+## Dependencies
+
+Tech Docs depends on the following open source software (see `.nvmrc` and `package.json` for versions):
+
+1. [Node.js](https://nodejs.org) - JavaScript development and build environment; the version noted in `.nvmrc` reflects the default version of Node.js in the Cloudflare Pages build image
+2. [Astro](https://astro.build/) - Static site generator conceptually based on "components" (React, Vue, Svelte, etc.) rather than "templates" (Jekyll, Handlebars, Pug, etc.)
+   1. [Starlight](https://starlight.astro.build/) - Astro plugin and theme for documentation websites
+   2. [Sharp](https://sharp.pixelplumbing.com/) - Image transformation library used by Astro
+3.
[Cypress](https://www.cypress.io/) - End-to-end testing framework +4. [Stylelint](https://stylelint.io/) - CSS linter used locally in text editors and remotely in [CI](#cicd) for testing + 1. [stylelint-config-recommended](https://github.com/stylelint/stylelint-config-recommended) - Base set of lint rules + 2. [postcss-html](https://github.com/ota-meshi/postcss-html) - PostCSS syntax for parsing HTML (and HTML-like including .astro files) + 3. [stylelint-config-html](https://github.com/ota-meshi/stylelint-config-html) - Allows Stylelint to parse .astro files +5. [Prettier](https://prettier.io/) - Source code formatter used locally in text editors and remotely in [CI](#cicd) for testing + 1. [prettier-plugin-astro](https://github.com/withastro/prettier-plugin-astro) - Allows Prettier to parse .astro files via the command line + +## Local development + +Run Tech Docs locally by cloning the Tech Docs repository, installing project dependencies, and spinning up a development server: + +```sh +# Requires git and Node.js + +# Clone Tech Docs and move to it +git clone https://github.com/archivesspace/tech-docs.git +cd tech-docs + +# Install dependencies +npm install + +# Run dev server +npm start +``` + +Now go to [localhost:4321](http://localhost:4321) to see Tech Docs running locally. Changes to the source code will be immediately reflected in the browser. + +### Building the site + +Building the site creates a set of static files, found in `dist` after build, that can be served locally or deployed to a server. Sometimes building the site surfaces errors not found in the development environment. + +```sh +# Build the site and output it to dist/ +npm run build +``` + +:::tip +Serve the built output by running `npm run preview` after a build. +::: + +### Available `npm` scripts + +The following scripts are made available via `package.json`. Invoke any script on the command line from the project root by prepending it with the `npm run` command, ie: `npm run start`. 
+ +- `start` -- run Astro dev server +- `build` -- build Tech Docs for production +- `preview` -- serve the static build +- `astro` -- get Astro help +- `test:dev` -- run tests in development mode +- `test:prod` -- run tests in production mode +- `test` -- defaults to run tests in production mode +- `prettier:check` -- check formatting with Prettier +- `prettier:fix` -- fix possible format errors with Prettier +- `stylelint:check` -- lint CSS with Stylelint +- `stylelint:fix` -- fix possible CSS lint errors with Stylelint + +## Documentation pages + +Documentation pages are implemented with Starlight’s `docs` content collection. Source files are in `src/content/docs/`, and Starlight generates their routes as part of the normal Astro static build output (no separate docs build step). Sidebar hierarchy is configured in `src/siteNavigation.json`. For copy-paste templates and short author-facing field guidance, see [YAML frontmatter](/about/authoring#yaml-frontmatter). + +### Adding documentation pages + +To add a new documentation page: + +1. Create a Markdown file in the appropriate docs section directory under `src/content/docs/`. +2. Add that page to `src/siteNavigation.json` in the correct section and in the correct order so it appears in the sidebar navigation as desired. +3. If the new page becomes the first page in its section, update the corresponding homepage hero link in `src/components/HomePage.astro` so the section link points to the new first page. + +### Legacy `index.md` pages + +Some section directories still contain legacy `index.md` pages from the old Tech Docs site. Those pages can still be routed (for example `/architecture` and `/architecture/index`), but they are not included in the sidebar since they are not listed in `src/siteNavigation.json`. 
+ +### Documentation content collection and schema + +In `src/content.config.ts`, the `docs` collection uses `docsLoader()` and [Starlight’s frontmatter schema](https://starlight.astro.build/reference/frontmatter/) via `docsSchema()`, extended with `issueUrl` and `issueText`. Frontmatter is validated at build time. Starlight requires a `title`; other keys are optional unless your page has a specific need. + +| Field | Required | Purpose | +| ----------------- | -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `title` | Yes | Page title in the layout, browser tab, and metadata. | +| `description` | No | Short summary for SEO, search, and social previews. Most pages set this; it is omitted on a few pages (for example [Staff interface](/architecture/frontend), [404](/404)). | +| `slug` | No | Overrides the URL segment instead of deriving it from the file path. | +| `editUrl` | No | Overrides the “Edit page” URL, or `false` to hide the link (for example on [404](/404)). | +| `head` | No | Extra tags for the document head (meta, link, custom title, etc.). | +| `tableOfContents` | No | Table of contents: `false` to hide, or `{ minHeadingLevel, maxHeadingLevel }` to tune range. | +| `template` | No | Starlight layout template (for example `splash`). | +| `hero` | No | Hero area for splash-style pages (`title`, `tagline`, optional `image`, `actions`, etc.). | +| `banner` | No | Optional banner above the page content. | +| `lastUpdated` | No | Override the displayed last-updated date, or `false` to hide it. | +| `prev` | No | Previous pagination link: `false`, a string label, or `{ link, label }`. | +| `next` | No | Next pagination link: `false`, a string label, or `{ link, label }`. 
For example, [Development](/about/development) sets this so “next” goes to Home instead of the external Help Center entry after it in the sidebar. | +| `pagefind` | No | Set `false` to omit the page from the Pagefind index. | +| `draft` | No | When `true`, exclude the page from production builds. | +| `sidebar` | No | Per-page sidebar label, order, badge, `hidden`, or link `attrs`. The main sidebar structure is configured in `src/siteNavigation.json`. | +| `issueUrl` | No | URL for the footer “report an issue” link, or `false` to hide it. Defaults in `src/content.config.ts` when omitted; authors may set explicitly (see [YAML frontmatter](/about/authoring#yaml-frontmatter)). | +| `issueText` | No | Label text for that footer link. Defaults in `src/content.config.ts` when omitted; authors may set explicitly (see [YAML frontmatter](/about/authoring#yaml-frontmatter)). | + +### Documentation routes + +- URLs are derived from file paths in `src/content/docs/` unless `slug` is set in frontmatter. +- Previous/next pagination is derived from sidebar order unless `prev`/`next` are overridden in frontmatter. + +### Documentation UI components + +| Area | Location | +| -------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------- | +| Sidebar hierarchy and grouping | `src/siteNavigation.json` | +| Default docs page title rendering | `src/components/CustomPageTitle.astro` (falls back to Starlight’s default `PageTitle` for non-blog routes) | +| Footer metadata/navigation (edit link, issue link, etc.) | `src/components/overrides/Footer.astro`, `src/components/overrides/EditLink.astro`, `src/components/IssueLink.astro` | + +### Documentation tests + +Documentation-page behavior is covered in Cypress, mainly `cypress/e2e/content-pages.cy.js` (sidebar, table of contents, footer metadata links, and pagination). 
+ +## Blog + +The [blog](/blog) is implemented as an Astro content collection alongside the docs collection. Post source files are in `src/content/blog/`; routes live under `src/pages/blog/`. There is no separate blog build step—blog pages are part of the normal Astro static output, and site search ([Search](#search)) indexes them like other HTML. For where to put files and example frontmatter, see [Authoring content](/about/authoring#where-pages-go) and [YAML frontmatter](/about/authoring#yaml-frontmatter). + +### Adding blog posts + +To add a new blog post, create a new Markdown file in `src/content/blog/` with the required frontmatter fields (`title`, `metaDescription`, `pubDate`, and `authors`). + +Optional fields (`teaser` and `updatedDate`) can also be added as needed. No `src/siteNavigation.json` changes are required for blog posts; valid files in the collection are included automatically when the site builds. + +### Blog content collection and schema + +The `blog` collection is registered in `src/content.config.ts` with a Zod schema. Frontmatter is validated at build time. Adding or renaming frontmatter fields requires updating that schema and every consumer of `entry.data` (blog pages, middleware, and tests). + +| Field | Required | Purpose | +| ----------------- | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `title` | Yes | Post headline on the post page and index card. May include HTML for display; the document `<title>` and prev/next pagination labels **strip HTML** from `title`. | +| `metaDescription` | Yes | Short summary for page meta description (SEO). Used as the index teaser text when `teaser` is omitted. | +| `teaser` | No | HTML or plain text for the blog index card (`set:html`). 
Prefer this for links or light HTML on the index; plain text in `title` is safest where tab titles and pagination matter. | +| `pubDate` | Yes | Publication date; posts are sorted by this field, newest first. Parsed from frontmatter and formatted for display in **UTC** on the index and post header. | +| `authors` | Yes | Array of author display names; shown comma-separated on the index and post page. | +| `updatedDate` | No | Optional revision date (`YYYY-MM-DD`). Stored in frontmatter but **not shown in the UI** today; useful for future display or consistency with the authoring template. | + +### Blog routes + +- `src/pages/blog/index.astro` — `/blog` index; loads posts, sorts by `pubDate` descending, passes data to the index UI. +- `src/pages/blog/[id].astro` — individual posts; `getStaticPaths` comes from the collection, so new valid posts appear on the next build. + +### Blog route middleware + +`src/blogRouteData.js` is Starlight route middleware for blog routes. It injects `pubDate`, `authors`, and `postTitle` for post pages and sets prev/next pagination (older post as “Previous,” newer as “Next”). Pagination labels use titles with HTML stripped. + +### Blog UI components + +| Area | Location | +| ------------------------------------ | ----------------------------------------------------------------------------- | +| Index list and cards | `src/components/BlogIndex.astro` | +| Index page title | `src/components/BlogIndexTitleHeader.astro` | +| Post title, date, authors, back link | `src/components/BlogPostTitleHeader.astro`, `src/components/BackToBlog.astro` | +| Default vs blog title | `src/components/CustomPageTitle.astro` | +| Header “Blog” link | `src/components/overrides/Header.astro` | +| Blog layout / sidebar behavior | `src/components/overrides/PageFrame.astro` | + +### Blog tests + +End-to-end coverage is in `cypress/e2e/blog.cy.js`. Update these tests when you change blog markup, URLs, or visible behavior. 

## Search

Site search is a [Starlight feature](https://starlight.astro.build/guides/site-search/):

> By default, Starlight sites include full-text search powered by [Pagefind](https://pagefind.app/), which is a fast and low-bandwidth search tool for static sites.
>
> No configuration is required to enable search. Build and deploy your site, then use the search bar in the site header to find content.

:::note
Search only runs in production builds, not in the dev server.
:::

## Theme customization

Starlight can be customized in various ways, including:

- [Settings](https://starlight.astro.build/guides/customization/) -- see `astro.config.mjs`
- [CSS](https://starlight.astro.build/guides/css-and-tailwind/) -- see `src/styles/custom.css`
- [Components](https://starlight.astro.build/guides/overriding-components/) -- see `src/components`

## Static assets

### Images

Most image files should be stored in `src/images`. This allows for [processing by Astro](https://docs.astro.build/en/guides/images/) which includes performance optimizations.

Images that should not be processed by Astro, like favicons, should be stored in `public`.

:::note[Use `src/images` for all content images]
Put all images used in Tech Docs content in `src/images`.
:::

### The `public` directory

Files placed in `public` are not processed by Astro. They are copied directly to the output and made available from the root of the site, so `public/favicon.svg` becomes available at `docs.archivesspace.org/favicon.svg`, while `public/example/slides.pdf` becomes available at `docs.archivesspace.org/example/slides.pdf`.

## Mermaid diagrams

Tech Docs supports Mermaid diagrams in both docs and blog content (for authoring syntax, see [Authoring content](/about/authoring#diagrams)). Mermaid is a text-to-diagram tool: authors write diagram definitions in a code fence, and Mermaid turns that text into SVG diagrams in the browser.
This differs from regular fenced code blocks that Starlight renders with [Expressive Code](https://expressive-code.com/) as static syntax-highlighted code snippets.

### Implementation

1. Runtime logic lives in `src/lib/mermaid.ts`.
2. The runtime is loaded by the Starlight page frame override in `src/components/overrides/PageFrame.astro`.
3. Mermaid fences are post-processed at runtime and rendered as SVG diagrams.

### Theme behavior

- Mermaid theme is derived from the site theme (`data-theme` on `<html>`):
  - dark mode => Mermaid `dark`
  - non-dark modes => Mermaid `default`
- A `MutationObserver` in `src/lib/mermaid.ts` watches for `data-theme` changes and re-renders existing Mermaid diagrams so colors update after theme toggles.
- Mermaid text color is explicitly set in `initializeMermaidRuntime()` for improved accessibility over its default styles:
  - dark mode text: `#fff`
  - light mode text: `#000`

### Maintenance notes

- If Starlight/Expressive Code markup changes in a future upgrade, update Mermaid selectors/parsing in `src/lib/mermaid.ts` (especially `pre[data-language="mermaid"]` and `.ec-line .code`).
- If layout-level script loading changes, keep `src/components/overrides/PageFrame.astro` loading `src/lib/mermaid.ts` on pages where markdown content appears.
- Keep Cypress coverage updated in `cypress/e2e/mermaid.cy.js` when Mermaid rendering behavior or markup changes.

## Update npm dependencies

Run the following commands locally to update the npm dependencies, then push the changes upstream.

```sh
# List outdated dependencies
npm outdated

# Update dependencies
npm update
```

## Import aliases

Astro supports [import aliases](https://docs.astro.build/en/guides/imports/#aliases) which provide shortcuts to writing long relative import paths.

```astro title="src/components/overrides/Example.astro" del="../../images" ins="@images"
---
import relativeA from '../../images/A_logo.svg' // no alias
import aliasA from '@images/A_logo.svg' // alias
---
```

## Sitemap

Starlight has built-in [sitemap support](https://starlight.astro.build/guides/customization/#enable-sitemap) which is enabled via the top-level `site` key in `astro.config.mjs`. This key generates `/sitemap-index.xml` and `/sitemap-0.xml` when Tech Docs is [built](#building-the-site), and adds the sitemap link to the `<head>` of every page. `public/robots.txt` also points to the sitemap.

## Testing

### End-to-end

Tech Docs uses [Cypress](https://www.cypress.io/) for end-to-end testing of customizations made to the underlying Starlight framework and other project needs. End-to-end tests are located in `cypress/e2e`.

Run the Cypress tests locally by first building and serving the site:

```sh
# Build the site
npm run build

# Serve the build output
npm run preview
```

Then **in a different terminal** initiate the tests:

```sh
# Run the tests
npm test
```

### Code style

Nearly all files in the Tech Docs code base get formatted by [Prettier](https://prettier.io/) to ensure consistent readability and syntax. Run Prettier locally to find format errors and automatically fix them when possible:

```sh
# Check formatting of .md, .css, .astro, .js, .yml, etc. files
npm run prettier:check

# Fix any errors that can be corrected automatically
npm run prettier:fix
```

All CSS in .css and .astro files is linted by [Stylelint](https://stylelint.io/) to help avoid errors and enforce conventions.
Run Stylelint locally to find lint errors and automatically fix them when possible:

```sh
# Check all CSS
npm run stylelint:check

# Fix any errors that can be corrected automatically
npm run stylelint:fix
```

### CI/CD

Before new changes are accepted into the code base, the [end-to-end](#end-to-end) and [code style](#code-style) tests need to pass. Tech Docs uses [GitHub Actions](https://docs.github.com/en/actions) for its continuous integration and continuous delivery (CI/CD) platform, which automates the testing and deployment processes. The tests are defined in yaml files found in `.github/workflows/` and are run automatically when new changes are proposed.
diff --git a/src/content/docs/fr/administration/backup.md b/src/content/docs/fr/administration/backup.md
new file mode 100644
index 0000000..688cf61
--- /dev/null
+++ b/src/content/docs/fr/administration/backup.md
@@ -0,0 +1,160 @@
---
title: Backup and recovery
description: Steps, commands, and advice for setting up your ArchivesSpace MySQL database and Solr index. Backups will ensure recovery in case of error or failure.
---

## Using the docker configuration package

### Database backups

The [Docker configuration package](/administration/docker) includes a mechanism that performs periodic backups of your MySQL database, using [databacker/mysql-backup](https://github.com/databacker/mysql-backup). It is by default configured to perform a dump every two hours. See [configuration](https://github.com/databacker/mysql-backup/blob/master/docs/configuration.md) for more options.

The automatically created backups are located in the [`backups` directory](/administration/docker/) of the docker configuration package.
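The schedule and other dump behavior are controlled through the backup container's environment in `docker-compose.yml`. A minimal sketch of the idea, assuming the environment-variable style of configuration described in the mysql-backup documentation (variable names and values here are illustrative, check your own compose file):

```yaml
# Hypothetical docker-compose fragment for the backup service
services:
  db-backup:
    image: databack/mysql-backup
    environment:
      DB_DUMP_FREQ: '120' # minutes between dumps (every two hours)
      DB_DUMP_TARGET: /db # where dumps are written inside the container
    volumes:
      - ./backups:/db # surfaces dumps in the package's backups directory
```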

#### When using Docker

You can explicitly create a dump of your dockerized database while the docker containers are running by running the following command in your host system shell:

```shell
docker exec mysql mysqldump -u root -p123456 archivesspace | gzip > /tmp/db.$(date +%F.%H%M%S).sql.gz
```

#### When using Docker Desktop

You can explicitly create a dump of your dockerized database while the docker containers are running by running the following command on the "Exec" tab of your mysql container (the "Exec" tab is already a shell inside the container, so no `docker exec` prefix is needed):

```shell
mysqldump -u root -p123456 archivesspace | gzip > /tmp/db.$(date +%F.%H%M%S).sql.gz
```

You can then export the created database dump from the `/tmp` directory of your mysql container using the "Files" tab.

## Managing your own backups

Performing regular backups of your MySQL database is critical. ArchivesSpace stores all of your records data in the database, so as long as you have backups of your database, you can always recover from errors and failures.

If you are running MySQL, the `mysqldump` utility can dump the database schema and data to a file. It's a good idea to run this with the `--single-transaction` option to avoid locking your database tables while your backups run. It is also essential to use the `--routines` flag, which will include functions and stored procedures in the backup. The `mysqldump` utility is widely used, and there are many tutorials available. As an example, something like this in your `crontab` would back up your database twice daily:

```shell
# Dump archivesspace database 6am and 6pm
30 06,18 * * * mysqldump -u as -pas123 archivesspace | gzip > ~/backups/db.$(date +%F.%H%M%S).sql.gz
```

You should store backups in a safe location.
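Backups kept only on the database host are lost together with the host, so it is worth shipping the newest dump somewhere else. A hedged sketch of one way to do that; the helper function and the remote host below are hypothetical, not part of ArchivesSpace:

```shell
# Print the newest gzip'd dump in a backup directory after verifying its
# integrity with gunzip -t; prints nothing and fails if no valid dump exists.
newest_valid_dump() {
  latest=$(ls -t "$1"/db.*.sql.gz 2>/dev/null | head -n 1)
  [ -n "$latest" ] && gunzip -t "$latest" 2>/dev/null && echo "$latest"
}

# Example use from cron (remote host and path are placeholders):
# newest_valid_dump ~/backups | xargs -I{} rsync -a {} backup@remote.example.org:/srv/aspace-backups/
```

Verifying the archive before copying it avoids faithfully replicating a truncated or corrupt dump offsite.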

If you are running with the demo database (NEVER run the demo database in production), you can create periodic database snapshots using the following configuration settings:

```ruby
# In this example, we create a snapshot at 4am each day and keep
# 7 days' worth of backups
#
# Database snapshots are written to 'data/demo_db_backups' by
# default.
AppConfig[:demo_db_backup_schedule] = "0 4 * * *"
AppConfig[:demo_db_backup_number_to_keep] = 7
```

Solr indexes can always be [recreated](/administration/indexes/) from the contents of the database. For large sites, where recreating the indexes would take too long, it is possible to [back up and restore Solr indexes](https://solr.apache.org/guide/solr/latest/deployment-guide/backup-restore.html). In that case, you also need to back up and restore the files used by the indexers to mark which part of the data is already indexed:

```shell
docker cp archivesspace:/archivesspace/data/indexer_state /tmp/indexer_state
docker cp archivesspace:/archivesspace/data/indexer_pui_state /tmp/indexer_pui_state
```

## Creating backups of your database using the provided script

ArchivesSpace provides simple scripts for Windows and Unix-like systems for backing up the database to a `.zip` file.

### When using the embedded demo database

Note: _NEVER use the demo database in production._ You can run:

```shell
scripts/backup.sh --output /path/to/backup-yyyymmdd.zip
```

and the script will generate a file containing a snapshot of the demo database.

### When using MySQL

If you are running against MySQL and have `mysqldump` installed, you can provide the `--mysqldump` option. This will read the database settings from your configuration file and add a dump of your MySQL database to the resulting `.zip` file.

```shell
scripts/backup.sh --mysqldump --output ~/backups/backup-yyyymmdd.zip
```

## Recovering from backup

When recovering an ArchivesSpace installation from backup, you will need to restore your database (either the demo database or MySQL).

After restoring your database, it is recommended to [recreate your Solr indexes](/administration/indexes/).

### Recovering your database

#### When managing your own MySQL

If you are using MySQL, recovering your database just requires loading your `mysqldump` backup into an empty database. If you are using the `scripts/backup.sh` script (described above), this dump file is named `mysqldump.sql` in your backup `.zip` file.

To load a MySQL dump file, follow the directions in _Set up your MySQL database_ to create an empty database with the appropriate permissions. Then, populate the database from your backup file using the MySQL client:

```shell
mysql -uas -p archivesspace < mysqldump.sql
```

where `as` is the user name, `archivesspace` is the database name, and `mysqldump.sql` is the mysqldump filename. You will be prompted for the password of the user.

#### When using the demo database

If you are using the demo database, your backup `.zip` file will contain a directory called `demo_db_backups`. Each subdirectory of `demo_db_backups` contains a backup of the demo database. To restore from a backup, copy its `archivesspace_demo_db` directory back to your ArchivesSpace data directory.
For example:

```shell
cp -a /unpacked/zip/demo_db_backups/demo_db_backup_1373323208_25926/archivesspace_demo_db \
/path/to/archivesspace/data/
```

#### When running on Docker

If you are using the Docker configuration package to run ArchivesSpace, you can restore a database dump onto your `archivesspace` MySQL database with the following command on your host system shell (`-i` keeps stdin open so the redirected file reaches the container):

```shell
docker exec -i mysql mysql -uas -pas123 archivesspace < /tmp/db.2025-02-26.164907.sql
```

##### When using Docker Desktop

On Docker Desktop, you can import your sql dump file into the `/tmp/` directory using the "Files" tab of your mysql container. Afterwards, on the "Exec" tab run the command:

```shell
gunzip -c /tmp/db.2026-02-17.155254.sql.gz | mysql -u as -pas123 archivesspace
```
diff --git a/src/content/docs/fr/administration/docker.md b/src/content/docs/fr/administration/docker.md
new file mode 100644
index 0000000..8488c78
--- /dev/null
+++ b/src/content/docs/fr/administration/docker.md
@@ -0,0 +1,226 @@
---
title: Running with Docker
description: Instructions on setting up, running, and managing an ArchivesSpace installation using Docker.
---

## Docker images

Starting with v4.0.0, ArchivesSpace officially supports using [Docker](https://www.docker.com/) as the easiest way to get up and running. Docker eases installing, upgrading, starting and stopping ArchivesSpace. It also makes it easy to set up ArchivesSpace as a system service that starts automatically on every reboot.

If you prefer not to use Docker, another (more involved) way to get ArchivesSpace up and running is installing the latest [distribution `.zip` file](/getting_started/zip_distribution).

ArchivesSpace Docker images are available on [Docker Hub](https://hub.docker.com/u/archivesspace).

- main application images are built from [this Dockerfile](https://github.com/archivesspace/archivesspace/blob/master/Dockerfile)
- solr images are built from [this Dockerfile](https://github.com/archivesspace/archivesspace/blob/master/solr/Dockerfile)

## Installing

### System requirements

ArchivesSpace on Docker has been tested on Ubuntu Linux, Mac OS X, and Windows. At least 1024 MB of RAM is required. We recommend using at least 2 GB for optimal performance.

### Software Dependencies

When using Docker, the only software dependency is [Docker](https://www.docker.com/) itself. Follow the [instructions](https://docs.docker.com/get-started/get-docker/) to install the Docker engine. Optionally installing [Docker Desktop](https://www.docker.com/products/docker-desktop/) provides a graphical way to manage, start and stop your docker containers, easily review the container logs, etc.

### Downloading the configuration package

To run ArchivesSpace with Docker, first download the ArchivesSpace docker configuration package of the latest release from [github](https://github.com/archivesspace/archivesspace/releases) (scroll down to the "Assets" section of the latest release page and look for the zip file named `archivesspace-docker-${VERSION}.zip`).

The downloaded configuration package contains a simple yet configurable and production-ready docker-based setup intended to run on a single computer.

### Contents of the configuration package

Unzipping the downloaded file will create an `archivesspace` directory with the following contents:

```
.
├── backups
├── config
│   └── config.rb
├── locales
├── plugins
├── proxy-config
│   └── default.conf
├── sql
├── docker-compose.yml
├── stylesheets
└── .env
```

- The `backups` directory is first created once you start the application and it will contain the automatically performed backups of the database. See the [Automated database backups](#automated-database-backups) section.

- The `config/config.rb` file contains the [main configuration](/customization/configuration/) of ArchivesSpace.
- The `locales` directory allows [customization of the UI text](/customization/locales/).
- The `plugins` directory is there to accommodate additional ArchivesSpace [plugins](/customization/plugins/). By default, it contains the [`local`](/customization/plugins/#adding-your-own-branding) and [`lcnaf`](https://github.com/archivesspace-plugins/lcnaf) plugins.
- `proxy-config/default.conf` contains the configuration of the bundled `nginx`; see also [proxy configuration](#proxy-configuration).
- In the `sql` directory you can put your `.sql` database dump file to initialize the new database; see the [next section](#migrating-from-the-zip-distribution-to-docker).
- The `stylesheets` directory contains the files that are used to create PDFs and other files.
- `docker-compose.yml` contains all the information required by Docker to build and run ArchivesSpace.
- `.env` contains configuration of the docker containers including:
  - Credentials used by ArchivesSpace to access its MySQL database. It is recommended to change the default root and user passwords to something safer.
  - The database connection URI, which should also be [updated accordingly](/customization/configuration/#database-config) after the database user password is updated in the step above.

## Migrating from the zip distribution to docker

If you are currently running ArchivesSpace using the zip file distribution, you can start using Docker instead.

### Create a backup of your ArchivesSpace instance database

Use `mysqldump` to create a dump of your MySQL database:

```shell
mysqldump -uroot -p123456 -h 127.0.0.1 archivesspace > /tmp/db.$(date +%F.%H%M%S).sql
```

Follow the steps under the [Backup and recovery](/administration/backup/) section if you need more instructions on how to create backups of your MySQL database.
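The restore step that follows only works with an uncompressed, plain-text `.sql` file, so it can be worth a quick check that the dump is not gzipped before moving on. A small hypothetical helper (not part of ArchivesSpace):

```shell
# Return success if the file does NOT start with the gzip magic bytes
# (0x1f 0x8b); a mysqldump output file starts with plain SQL text.
is_plain_sql() {
  case "$(head -c 2 "$1" | od -An -tx1 | tr -d ' \n')" in
    1f8b) return 1 ;;
    *) return 0 ;;
  esac
}
```

For example, `is_plain_sql /tmp/db.2025-01-01.000000.sql && echo ok` (filename illustrative) before copying the file into the `sql` directory.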

### Initialize and migrate the database on Docker

Copy your `.sql` database dump file created above into the `sql` directory of your unzipped Docker configuration package. Make sure the filename includes the `.sql` extension. The file should be in plain text format (not zipped). Docker will pick it up when it starts for the first time and restore the dump to your new database.

If you created the dump on an earlier ArchivesSpace version, the system will apply any pending database migrations to upgrade your database to the ArchivesSpace version you are currently running on Docker.

After the initial run you will want to remove that `.sql` file from the `sql` directory of your unzipped Docker configuration package.

The docker configuration package already includes a configurable database backup mechanism for MySQL. Read more about it in the [backup and recovery section](/administration/backup/#using-the-docker-configuration-package).

## Running

### Resource limits

We recommend allocating at least 2 GB per container for optimal performance. If the host instance is devoted to running ArchivesSpace, it is advisable to configure no memory limit for Docker containers.

When using Docker Desktop, a default memory limit is set to 50% of your host's memory. To increase the RAM and other resource limits when using Docker Desktop, see [the documentation](https://docs.docker.com/desktop/settings-and-maintenance/settings/#resources).

When using Docker without Docker Desktop, no memory limit is set by default. See the [Docker documentation](https://docs.docker.com/engine/containers/resource_constraints/) if you need to set limits to the resources used by ArchivesSpace containers.

### Note on migrating from the zip distribution

If migrating from the zip distribution to Docker, you most probably have local MySQL and Solr instances running. Starting ArchivesSpace with Docker will start Docker-based MySQL and Solr instances.
In order to avoid port binding conflicts, make sure that you stop your local MySQL and Solr instances before proceeding.

### Start

Open a terminal, change to the `archivesspace` directory that contains the `docker-compose.yml` file and run:

```shell
docker compose up --detach
```

The first time you start ArchivesSpace with Docker, the container images will be downloaded and configuration steps such as database setup and solr index initialization will be performed automatically. The whole process can take ten minutes or more depending on the power of your machine and your internet connection speed. **Note:** if you are migrating from the zip distribution to Docker and have already copied a dump of your database into the `sql` directory, initialization of the database and indexing it in solr can take a long time depending on the size of your data.

Starting with the `--detach` option allows closing the terminal without stopping ArchivesSpace. Viewing the logs of running ArchivesSpace containers is possible in [Docker Desktop](https://www.docker.com/products/docker-desktop/) or in a terminal with:

```shell
docker compose logs --follow
```

Watch the logs for the welcome message:

```
2024-12-04 18:42:17 archivesspace | ************************************************************
2024-12-04 18:42:17 archivesspace | Welcome to ArchivesSpace!
2024-12-04 18:42:17 archivesspace | You can now point your browser to http://localhost:8080
2024-12-04 18:42:17 archivesspace | ************************************************************
```

Using the default proxy configuration, the public user interface becomes available at http://localhost/ and the staff user interface at http://localhost/staff/ (default login: admin / admin).

You can see the status of your running containers with:

```
docker ps
```

This will give a listing like this:

```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6cd7114c1796 nginx:1.21 "/docker-entrypoint.…" 26 hours ago Up 29 minutes 0.0.0.0:80->80/tcp, :::80->80/tcp proxy
9ed453c46a9f archivesspace/archivesspace:4.0.0 "/archivesspace/star…" 26 hours ago Up 29 minutes (healthy) 8080-8081/tcp, 8089-8090/tcp, 8092/tcp archivesspace
ec71dd3030b7 databack/mysql-backup:latest "/entrypoint dump" 26 hours ago Up 29 minutes db-backup
8b74aa374ec8 archivesspace/solr:4.0.0 "docker-entrypoint.s…" 26 hours ago Up 29 minutes 0.0.0.0:8983->8983/tcp, :::8983->8983/tcp solr
d2cf634744fe mysql:8 "docker-entrypoint.s…" 26 hours ago Up 29 minutes 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp mysql
```

If you also have [Docker Desktop](https://www.docker.com/products/docker-desktop/) installed, you can use it to start, stop and manage the ArchivesSpace containers after they have been created for the first time. Docker Desktop also has a built-in terminal window that can be used to run Docker commands.

### Stop

The following commands need to run from the `archivesspace` directory that contains the `docker-compose.yml` file.
You can stop running containers (without deleting them) with the command:

```shell
docker compose stop
```

They can be started again with:

```shell
docker compose up --detach
```

### Start a shell within a container to run the provided scripts

You can get a `bash` shell on the container running the archivesspace application and run any of the scripts in the `scripts` directory with:

```shell
$ docker exec -it archivesspace bash
archivesspace@9ed453c46a9f:/$ cd archivesspace/scripts/
archivesspace@9ed453c46a9f:/archivesspace/scripts$ ls
backup.bat backup.sh ead_export.bat ead_export.sh find-base.sh initialize-plugin.bat initialize-plugin.sh password-reset.bat password-reset.sh rb setup-database.bat setup-database.sh
archivesspace@9ed453c46a9f:/archivesspace/scripts$ ./setup-database.sh
NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.base/java.io=ALL-UNNAMED
Loading ArchivesSpace configuration file from path: /archivesspace/config/config.rb
Loading ArchivesSpace configuration file from path: /archivesspace/config/config.rb
Loading ArchivesSpace configuration file from path: /archivesspace/config/config.rb
Detected MySQL connector 8+
Running migrations against jdbc:mysql://db:3306/archivesspace?useUnicode=true&characterEncoding=UTF-8&user=[REDACTED]&password=[REDACTED]&useSSL=false&allowPublicKeyRetrieval=true&serverTimezone=UTC
All done.
```

### Copy files from and to your data directory

The archivesspace `data` directory is not exposed in the Docker configuration package (unlike `config` and `locales`, which are exposed and easily accessible). This is due to issues we have had on Windows when exposing
+ +If you need to copy files from/to the `data` directory, or any other directory of the archivesspace installation, you can use [`docker cp`](https://docs.docker.com/reference/cli/docker/container/cp/) commands, such as: + +```shell +docker cp archivesspace:/archivesspace/data/indexer_state /tmp/indexer_state +docker cp ~/Desktop/test.png archivesspace:/archivesspace/data +``` + +## Automated database backups + +The Docker configuration package includes a mechanism that will perform periodic backups of your MySQL database, see the [Backup and Recovery](/administration/backup/#backups-when-using-the-docker-configuration-package) for more information. + +## Proxy Configuration + +The Docker configuration package includes an `nginx` based proxy that is by default binding on port 80 of the host machine (see `NGINX_PORT` variable in `.env` file). See `proxy-config/default.conf` and the [nginx docker page](https://hub.docker.com/_/nginx) for more configuration options. + +## Upgrading + +If you are already using the Docker configuration package and upgrading to a newer ArchivesSpace version, [download and extract](#downloading-the-configuration-package) the latest version of the Docker configuration package. + +### With solr configuration / schema changes + +If the ArchivesSpace version you are upgrading to includes solr configuration or schema changes (see the [release notes](https://github.com/archivesspace/archivesspace/releases)), then you need to recreate your solr core and re-index. 
Change to the `archivesspace` directory where you extracted the freshly downloaded Docker configuration package and run:

```shell
docker compose down solr app
docker volume rm archivesspace_app-data archivesspace_solr-data
docker compose pull
docker compose up -d --build --force-recreate
```

### Without solr configuration / schema changes

If no solr configuration or schema changes are included, change to the extracted `archivesspace` directory and run:

```shell
docker compose pull
docker compose up -d --build --force-recreate
```
diff --git a/src/content/docs/fr/administration/getting_started.mdx b/src/content/docs/fr/administration/getting_started.mdx
new file mode 100644
index 0000000..5572750
--- /dev/null
+++ b/src/content/docs/fr/administration/getting_started.mdx
@@ -0,0 +1,143 @@
---
title: Getting started
description: Detailed hardware and software requirements for running ArchivesSpace, including instructions on setting up and running an ArchivesSpace instance using the latest distribution .zip file.
---

import LatestReleaseBlurb from '@components/LatestReleaseBlurb.astro'

## The latest release

<LatestReleaseBlurb />

## Two installation methods

There are two different ways to install ArchivesSpace:

- Using Docker
- Using the `.zip` file distribution

### Using Docker

See the [Running with Docker](/administration/docker/) page for instructions on how to install ArchivesSpace using Docker.

Starting with ArchivesSpace v4.0.0, the easiest and recommended way to get up and running is using Docker. This method eases installing, upgrading, starting, and stopping ArchivesSpace. It also makes it easy to set up ArchivesSpace as a system service that starts automatically on every reboot.

### Using the `.zip` file distribution

The older and more involved way is to install from the latest distribution `.zip` file as described below.

#### System requirements

##### Operating system

ArchivesSpace is tested on Ubuntu Linux, Mac OS X, and Windows.

##### Memory

At least 1024 MB of RAM allocated to the application is required. We recommend using at least 2 GB for optimal performance.

#### Software requirements

When using the zip distribution, a Java runtime environment and a Solr instance are required. See [using Docker](/administration/docker/) to avoid these dependencies.

##### Java Runtime Environment

We recommend using [OpenJDK](https://openjdk.org/projects/jdk/). The following table lists the supported Java versions for each version of ArchivesSpace:

| ArchivesSpace version | OpenJDK version |
| --------------------- | --------------- |
| ≤ v3.5.1              | 8 or 11         |
| v4.0.0 up to v4.1.1   | 11 or 17        |
| ≥ v4.2.0              | 17 or 21        |

Although the JRuby version used in ArchivesSpace v4.2.0 is still compatible with Java 11, we highly recommend using Java 17 or 21, as those are the Java versions ArchivesSpace v4.2.0 has been tested with. You can still use Java 11 with v4.2.0, but the ArchivesSpace Program Team can only provide support for environments using Java versions we have tested ArchivesSpace with (17 or 21).

Note that in the next major release we expect to drop support for Java 17 and only support Java 21 and 25.

##### Solr

Up to ArchivesSpace v3.1.1, the zip file distribution includes an embedded Solr v4 instance, which is deprecated and not supported anymore. Use the Docker images provided in the [ArchivesSpace Docker repository](https://hub.docker.com/orgs/archivesspace/repositories) and see also [using Docker](/administration/docker/) to avoid managing an external Solr instance.

ArchivesSpace v3.2.0 or above requires an external Solr instance when running using the zip distribution.
The table below summarizes the supported Solr versions for each ArchivesSpace version:

| ArchivesSpace version | External Solr version     |
| --------------------- | ------------------------- |
| ≤ v3.1.1              | no external solr required |
| v3.2.0 up to v3.5.1   | 8 (8.11)                  |
| v4.0.0 up to v4.1.1   | 9 (9.4.1)                 |
| ≥ v4.2.0              | 9 (9.9.0)                 |

Each ArchivesSpace version is tested for compatibility with the corresponding Solr version listed in the table above. Using the corresponding version of Solr is recommended, as that version is used during development and when running the ArchivesSpace automated tests.

If you need to use ArchivesSpace with an older version of Solr, check the [release notes](https://github.com/archivesspace/archivesspace/releases) for any potential version compatibility issues.

**Note: the ArchivesSpace Program Team can only provide support for Solr deployments using the "officially" supported version with the standard configuration provided by the application. Everything else will be treated as "best effort" community-led support.**

See [Running with external Solr](/provisioning/solr) for more information on installing and upgrading Solr.

##### Database

While ArchivesSpace does include an embedded database, MySQL is required for production use.

(While not officially supported by ArchivesSpace, some community members use MariaDB, so there is some community support for version 10.4.10 only.)

**The embedded database is for testing purposes only. You should use MySQL or MariaDB for any data intended for production, including data in a test instance that you intend to move over to a production instance.**

All ArchivesSpace versions can run on MySQL version 5.x or 8.x.

#### Install and run

Download the distribution `.zip` for your version from [ArchivesSpace releases on GitHub](https://github.com/archivesspace/archivesspace/releases).
+ +Confirm a supported Java version is active on your PATH: + +```sh +java -version +``` + +Compare the output with [Java Runtime Environment](#java-runtime-environment). If needed, install a supported OpenJDK or point your environment at one (avoid using an unsupported newer Java as the default). + +Extract the `.zip`; it creates a directory named `archivesspace`. Before starting ArchivesSpace, finish provisioning: + +- [MySQL](/provisioning/mysql) +- JDBC driver: [Download MySQL Connector](/provisioning/mysql/#download-mysql-connector) +- External [Solr](/provisioning/solr) when your version requires it (ArchivesSpace v3.2.0 and later on the zip distribution; see [Solr](#solr)) + +**Do not proceed until MySQL and Solr (when required) are running.** + +Start ArchivesSpace from that directory. On Linux and macOS: + +```shell +cd /path/to/archivesspace +./archivesspace.sh +``` + +On Windows: + +```shell +cd \path\to\archivesspace +archivesspace.bat +``` + +This runs ArchivesSpace in the foreground (it stops when you close the terminal). By default, logs are written to `logs/archivesspace.out`. + +**Note:** On Windows, errors such as `unable to resolve type 'size_t'` or `no such file to load -- bundler` often mean the path to the `archivesspace` folder contains spaces. Use a path without spaces. + +##### Verify and sign in + +The first startup can take about a minute. Then confirm the services in a browser: + +- http://localhost:8089/ — backend +- http://localhost:8080/ — staff interface +- http://localhost:8081/ — public interface +- http://localhost:8082/ — OAI-PMH server +- http://localhost:8090/ — Solr admin console + +In the staff interface, sign in with the default administrator account: + +- Username: `admin` +- Password: `admin` + +Create a repository via **System** → **Manage repositories** (top right). From **System** you can manage users and other administration tasks. 
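The service checks above can also be scripted. A minimal sketch, assuming the default ports on localhost (it only reports status; a service shows as "up" only while ArchivesSpace is running):

```shell
# Probe each default ArchivesSpace port and report whether it responds.
for port in 8089 8080 8081 8082 8090; do
  if curl -sf -o /dev/null "http://localhost:${port}/"; then
    echo "port ${port}: up"
  else
    echo "port ${port}: not responding"
  fi
done
```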
**Change the default `admin` password before production use.** diff --git a/src/content/docs/fr/administration/index.md b/src/content/docs/fr/administration/index.md new file mode 100644 index 0000000..91ff590 --- /dev/null +++ b/src/content/docs/fr/administration/index.md @@ -0,0 +1,13 @@ +--- +title: Administration basics +description: Index of the administration pages for the tech-docs website. +--- + +- [Getting started](./getting_started) +- [Running ArchivesSpace as a Unix daemon](./unix_daemon) +- [Running ArchivesSpace as a Windows service](./windows) +- [Backup and recovery](./backup) +- [Re-creating indexes](./indexes) +- [Resetting passwords](./passwords) +- [Upgrading](./upgrading) +- [Log rotation](./logrotate) diff --git a/src/content/docs/fr/administration/indexes.md b/src/content/docs/fr/administration/indexes.md new file mode 100644 index 0000000..aef049f --- /dev/null +++ b/src/content/docs/fr/administration/indexes.md @@ -0,0 +1,86 @@ +--- +title: Recreating indexes +description: Steps for performing soft reindexes and full reindexes of Solr, including internal and external Solr. +--- + +There are two strategies for reindexing ArchivesSpace: + +- soft reindex +- full reindex + +## Soft reindex + +A soft reindex updates the existing documents in Solr without directly +touching the actual index documents on the filesystem. This can be done +while the system is running and is suitable for most use cases. + +There are two common ways to perform a soft reindex: + +1. Delete indexer state files + +ArchivesSpace keeps track of what has been indexed by using the files +under `data/indexer_state` and `data/indexer_pui_state` (for the PUI). + +If these files are missing, the indexer assumes that nothing has been +indexed and reindexes everything. To force ArchivesSpace to reindex all +records, just delete the files in `/path/to/archivesspace/data/indexer_state` +and `/path/to/archivesspace/data/indexer_pui_state`. 
+
+You can also do this selectively by record type. For example, to reindex
+accessions in repository 2, delete the file called `2_accession.dat`.
+
+2. Bump `system_mtime` values in the database
+
+If you update a record's `system_mtime`, it becomes eligible for reindexing.
+
+```sql
+# reindex all resources
+UPDATE resource SET system_mtime = NOW();
+# reindex resource 1
+UPDATE resource SET system_mtime = NOW() WHERE id = 1;
+```
+
+## Full reindex
+
+A full reindex is a complete rebuild of the index from the database. This
+may be required if you are having indexer issues, in the case of index
+corruption, or if called for by an upgrade owing to changes in ArchivesSpace's
+Solr configuration.
+
+To perform a full reindex:
+
+### ArchivesSpace <= 3.1.0 (embedded Solr)
+
+- Shut down ArchivesSpace
+- Delete these directories:
+  - `rm -rf /path/to/archivesspace/data/indexer_state/`
+  - `rm -rf /path/to/archivesspace/data/indexer_pui_state/`
+  - `rm -rf /path/to/archivesspace/data/solr_index/`
+- Restart ArchivesSpace
+
+### ArchivesSpace > 3.1.0 (external Solr)
+
+For external Solr there is a plugin that can perform all of the reindexing steps: [aspace-reindexer](https://github.com/lyrasis/aspace-reindexer)
+
+Manual steps:
+
+- Shut down ArchivesSpace
+- Delete these directories:
+  - `rm -rf /path/to/archivesspace/data/indexer_state/`
+  - `rm -rf /path/to/archivesspace/data/indexer_pui_state/`
+- Perform a delete-all Solr query:
+  - `curl -X POST -H 'Content-Type: application/json' --data-binary '{"delete":{"query":"*:*"}}' http://${solrUrl}:${solrPort}/solr/archivesspace/update?commit=true`
+  - Windows PowerShell:
+    ```
+    Invoke-RestMethod -Uri "http://localhost:8983/solr/archivesspace/update?commit=true" `
+      -Method Post `
+      -ContentType "application/json" `
+      -Body '{"delete":{"query":"*:*"}}'
+    ```
+- Restart ArchivesSpace
+
+---
+
+You can watch the [Tips for indexing ArchivesSpace](https://www.youtube.com/watch?v=yFJ6yAaPa3A) YouTube video to see these steps
performed.
+
+---
diff --git a/src/content/docs/fr/administration/logrotate.md b/src/content/docs/fr/administration/logrotate.md
new file mode 100644
index 0000000..d96ce90
--- /dev/null
+++ b/src/content/docs/fr/administration/logrotate.md
@@ -0,0 +1,28 @@
+---
+title: Log rotation
+description: Details an example of how to set up log rotation, which helps keep the ArchivesSpace log file from growing excessively.
+---
+
+In order to prevent your ArchivesSpace log file from growing excessively, you can set up log rotation. How to set up log rotation is specific to your institution, but here is an example logrotate config file with an explanation of what it does.
+
+Place a config file in `/etc/logrotate.d/`:
+
+```
+ /<install location>/archivesspace/logs/archivesspace.out {
+ daily
+ rotate 7
+ compress
+ notifempty
+ missingok
+ copytruncate
+ }
+```
+
+This example configuration file:
+
+- rotates the logs daily
+- keeps 7 days worth of logs
+- compresses the logs so they take up less space
+- ignores empty logs
+- does not report errors if the log file is missing
+- creates a copy of the original log file for rotation before truncating the contents of the original file
diff --git a/src/content/docs/fr/administration/passwords.md b/src/content/docs/fr/administration/passwords.md
new file mode 100644
index 0000000..088336b
--- /dev/null
+++ b/src/content/docs/fr/administration/passwords.md
@@ -0,0 +1,16 @@
+---
+title: Resetting passwords
+description: How to run a script that resets a user's password within ArchivesSpace.
+---
+
+Under the `scripts` directory you will find a script that lets you
+reset a user's password. You can invoke it as:
+
+```
+scripts/password-reset.sh theusername newpassword # or password-reset.bat under Windows
+```
+
+If you are running against MySQL, you can use this command to set a
+password while the system is running. If you are running against the
+demo database, you will need to shut down ArchivesSpace before running
+this script.
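A small wrapper can pick the platform-appropriate script name. This is a sketch: the username and password are placeholders, and the optional `curl` check assumes the backend's default port (8089):

```shell
# Choose the reset script name for the current platform.
reset_script() {
  case "$(uname -s 2>/dev/null || echo Windows)" in
    CYGWIN*|MINGW*|MSYS*|Windows) echo "scripts/password-reset.bat" ;;
    *)                            echo "scripts/password-reset.sh" ;;
  esac
}

echo "Would run: $(reset_script) theusername newpassword"

# Once ArchivesSpace is running again, you can confirm the new password
# by logging in against the backend (returns a JSON session on success):
# curl -s -F password=newpassword http://localhost:8089/users/theusername/login
```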
diff --git a/src/content/docs/fr/administration/unix_daemon.md b/src/content/docs/fr/administration/unix_daemon.md new file mode 100644 index 0000000..ba8d9d3 --- /dev/null +++ b/src/content/docs/fr/administration/unix_daemon.md @@ -0,0 +1,60 @@ +--- +title: Running as a Unix daemon +description: Steps for running ArchivesSpace in the background as a daemon using the startup script, and additional info on configuring startup/init settings. +--- + +The `archivesspace.sh` startup script doubles as an init script. If +you run: + +``` +archivesspace.sh start +``` + +ArchivesSpace will run in the background as a daemon (logging to +`logs/archivesspace.out` by default, as before). You can shut it down with: + +``` +archivesspace.sh stop +``` + +You can even install it as a system-wide init script by creating a +symbolic link: + +``` +cd /etc/init.d +ln -s /path/to/your/archivesspace/archivesspace.sh archivesspace +``` + +Note: By default ArchivesSpace will overwrite the log file when restarted. You +can change that by modifying `archivesspace.sh` and changing the `$startup_cmd` +to include double greater than signs: + +``` +$startup_cmd &>> \"$ARCHIVESSPACE_LOGS\" & +``` + +Then use the appropriate tool for your distribution to set up the +run-level symbolic links (such as `chkconfig` for RedHat or +`update-rc.d` for Debian-based distributions). + +Note that you may want to edit archivesspace.sh to set the account +that the system runs under, JVM options, and so on. 
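The overwrite-vs-append behavior behind the note above can be demonstrated with a throwaway file: `>` truncates the target on every start, while `>>` appends to it.

```shell
# '>' truncates the target file each time; '>>' appends to it.
log="$(mktemp)"
echo "first start"  >  "$log"
echo "restart"      >  "$log"    # truncates: "first start" is lost
echo "next restart" >> "$log"    # appends
cat "$log"
rm -f "$log"
```

Only "restart" and "next restart" survive, which is why the daemon's default `>` redirection loses the previous log on every restart.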
+
+For systems that use systemd, you may wish to use a systemd unit file for ArchivesSpace.
+
+Something similar to this should work:
+
+```
+[Unit]
+Description=ArchivesSpace Application
+After=syslog.target network.target
+[Service]
+Type=forking
+ExecStart=/path/to/your/archivesspace/archivesspace.sh start
+ExecStop=/path/to/your/archivesspace/archivesspace.sh stop
+PIDFile=/path/to/your/archivesspace/archivesspace.pid
+User=archivesspace
+Group=archivesspace
+[Install]
+WantedBy=multi-user.target
+```
diff --git a/src/content/docs/fr/administration/upgrading.md b/src/content/docs/fr/administration/upgrading.md
new file mode 100644
index 0000000..9c5376d
--- /dev/null
+++ b/src/content/docs/fr/administration/upgrading.md
@@ -0,0 +1,183 @@
+---
+title: Upgrading when using the zip distribution
+description: Instructions on how to update ArchivesSpace.
+---
+
+If you have installed ArchivesSpace using the Docker Configuration Package, refer to [upgrading with Docker](/administration/docker/#upgrading). If you have installed ArchivesSpace using the zip distribution, read on! (In case you do not know what the difference is, see the [getting started page](/administration/getting_started/#two-ways-to-get-up-and-running)).
+
+You can upgrade most versions of ArchivesSpace to a later version using these general instructions. Typically you do not need to progress through other versions of ArchivesSpace to get to a later one, unless there are special considerations for a specific version. Special considerations for these versions are noted here and in release notes.
+
+- **[Special considerations when upgrading to v1.1.0](/administration/upgrading_1_1_0)**
+- **[Special considerations when upgrading to v1.1.1](/administration/upgrading_1_1_1)**
+- **[Special considerations when upgrading from v1.4.2 to 1.5.x (these considerations also apply when upgrading from 1.4.2 to any version through 2.0.1)](/administration/upgrading_1_5_0)**
+- **[Special considerations when upgrading to 2.1.0](/administration/upgrading_2_1_0)**
+- **[Changing to external Solr when upgrading to 3.2.0 or later versions](https://docs.archivesspace.org/provisioning/solr/).**
+
+## Create a backup of your ArchivesSpace instance
+
+You should make sure you have a working backup of your ArchivesSpace
+installation before attempting an upgrade. Follow the steps
+under the [Backup and recovery section](/administration/backup) to do this.
+
+## Unpack the new version
+
+It's a good idea to unpack a fresh copy of the version of
+ArchivesSpace you are upgrading to. This will ensure that you are
+running the latest versions of all files. In the examples below,
+replace the lower case x with the version number you are updating to. For example,
+1.5.2 or 1.5.3.
+
+For example, on Mac OS X or Linux:
+
+```shell
+$ mkdir archivesspace-1.5.x
+$ cd archivesspace-1.5.x
+$ curl -LJO https://github.com/archivesspace/archivesspace/releases/download/v1.5.x/archivesspace-v1.5.x.zip
+$ unzip -x archivesspace-v1.5.x.zip
+```
+
+( The curl step is optional and simply downloads the distribution from GitHub. You can also
+download the zip file in your browser and copy it to the directory. )
+
+On Windows, you can do the same by extracting ArchivesSpace into a new
+folder you create in Windows Explorer.
+
+## Shut down your ArchivesSpace instance
+
+To ensure you get a consistent copy, you will need to shut down your
+running ArchivesSpace instance now.
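Before copying files into the new version, it can help to sanity-check that the old installation contains the items you expect to carry over. A sketch, with an example path:

```shell
# Check the old installation for the directories the upgrade steps copy over.
OLD=/path/to/archivesspace-1.4.2/archivesspace   # example path; adjust to yours
for item in data config lib plugins; do
  if [ -d "$OLD/$item" ]; then
    echo "found: $item"
  else
    echo "missing: $item"
  fi
done
```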
+ +## Copy your configuration and data files + +You will need to bring across the following files and directories from +your original ArchivesSpace installation: + +- the `data` directory (see **Indexes note** below) +- the `config` directory (see **Configuration note** below) +- your `lib/mysql-connector*.jar` file (if using MySQL) +- any plugins and local modifications you have installed in your `plugins` directory + +For example, on Mac OS X or Linux: + +```shell +$ cd archivesspace-1.5.x/archivesspace +$ cp -a /path/to/archivesspace-1.4.2/archivesspace/data/* data/ +$ cp -a /path/to/archivesspace-1.4.2/archivesspace/config/* config/ +$ cp -a /path/to/archivesspace-1.4.2/archivesspace/lib/mysql-connector* lib/ +$ cp -a /path/to/archivesspace-1.4.2/archivesspace/plugins/local plugins/ +$ cp -a /path/to/archivesspace-1.4.2/archivesspace/plugins/wonderful_plugin plugins/ +``` + +Or on Windows: + +``` +$ cd archivesspace-1.5.x\archivesspace +$ xcopy \path\to\archivesspace-1.4.2\archivesspace\data\* data /i /k /h /s /e /o /x /y +$ xcopy \path\to\archivesspace-1.4.2\archivesspace\config\* config /i /k /h /s /e /o /x /y +$ xcopy \path\to\archivesspace-1.4.2\archivesspace\lib\mysql-connector* lib /i /k /h /s /e /o /x /y +$ xcopy \path\to\archivesspace-1.4.2\archivesspace\plugins\local plugins\local /i /k /h /s /e /o /x /y +$ xcopy \path\to\archivesspace-1.4.2\archivesspace\plugins\wonderful_plugin plugins\wonderful_plugin /i /k /h /s /e /o /x /y +``` + +Note that you may want to preserve the logs file (`logs/archivesspace.out` +by default) from your previous installation--just in case you need to +refer to it later. + +### Configuration note + +Sometimes a new release of ArchivesSpace will introduce new +configuration settings that weren't present in previous releases. +Before you replace the distribution `config/config.rb` with your +original version, it's a good idea to review the distribution version +to see if there are any new configuration settings of interest. 
+
+Upgrade notes will generally draw attention to any configuration
+settings you need to set explicitly, but you never know when you'll
+discover a new, exciting feature! Documentation might also refer to
+uncommenting configuration options that won't be in your file if you
+keep your older version.
+
+### Indexes note
+
+Sometimes a new release of ArchivesSpace will require a FULL reindex,
+which means you do not want to copy over anything from your data directory
+to your new release. The data directory contains the indexes created by Solr.
+Check the release notes of the new version for any details about reindexing and
+the [recreating indexes section](/administration/indexes/) for instructions on recreating indexes.
+
+## Transfer your locales data
+
+If you've made modifications to your locales file ( `en.yml` ) with customized
+labels, titles, tooltips, etc., you'll need to transfer those to your new
+locale file.
+
+A good way to do this is to use a diff tool, like Notepad++, TextMate, or just
+the Linux `diff` command:
+
+```shell
+$ diff /path/to/archivesspace-1.4.2/locales/en.yml /path/to/archivesspace-1.5.x/archivesspace/locales/en.yml
+$ diff /path/to/archivesspace-1.4.2/locales/enums/en.yml /path/to/archivesspace-v1.5.x/archivesspace/locales/enums/en.yml
+```
+
+This will show you the differences in your current locales files, as well as the
+new additions in the new version locales files. Simply copy the values you wish
+to keep from your old ArchivesSpace locales files to your new ArchivesSpace locales
+files.
+
+## Run the database migrations
+
+With everything copied, the final step is to run the database
+migrations. This will apply any schema changes and data migrations
+that need to happen as a part of the upgrade. To do this, use the
+`setup-database` script for your platform.
For example, on Mac OS X
+or Linux:
+
+```shell
+$ cd archivesspace-1.5.x/archivesspace
+$ scripts/setup-database.sh
+```
+
+Or on Windows:
+
+```shell
+$ cd archivesspace-1.5.x\archivesspace
+$ scripts\setup-database.bat
+```
+
+## Solr configuration updates
+
+If the release you are upgrading to includes updates in the Solr schema or other configuration files (see the release notes)
+and you're using external Solr (required beginning with version 3.2.0), you will need to update the Solr schema and configuration files
+accordingly, by [copying the Solr configuration files](/provisioning/solr/#copy-the-config-files) from the release package to your external Solr configuration.
+See also the [Full instructions for using external Solr with ArchivesSpace](/provisioning/solr).
+
+## If you've deployed to Tomcat
+
+The steps to deploy to Tomcat are essentially the same as in the
+[archivesspace_tomcat](https://github.com/archivesspace-labs/archivesspace_tomcat) repository.
+
+But, prior to running your setup-tomcat script, you'll need to be sure to clean out
+any libraries from the previous ASpace version from your Tomcat classpath.
+
+ 1. Stop Tomcat
+ 2. Unpack your new version of ArchivesSpace
+ 3. Configure your MySQL database in the config.rb ( just like in the
+    install instructions )
+ 4. Make sure all your other local configuration settings are in your
+    config.rb file ( check your Tomcat conf/config.rb file for your current
+    settings. )
+ 5. Make sure your MySQL connector jar is in the lib directory
+ 6. Run your setup-database script to migrate your database.
+ 7. Delete all ASpace related jar libraries in your Tomcat's lib directory. These
+    will include the "gems" folder, as well as "common.jar" and some
+    [others](https://github.com/archivesspace/archivesspace/tree/master/common/lib).
+    This will make sure you're running the correct version of the dependent
+    libraries for your new ASpace version.
+    Just be sure not to delete any of the Apache Tomcat libraries.
+ 8.
Run your setup-tomcat script ( just like in the install instructions ). + This will copy all the files over to Tomcat. + 9. Start Tomcat + +## That's it! + +You can now start your new ArchivesSpace version as normal. diff --git a/src/content/docs/fr/administration/upgrading_1_1_0.md b/src/content/docs/fr/administration/upgrading_1_1_0.md new file mode 100644 index 0000000..868b49f --- /dev/null +++ b/src/content/docs/fr/administration/upgrading_1_1_0.md @@ -0,0 +1,62 @@ +--- +title: Upgrading to 1.1.0 +description: Special considerations when upgrading from ArchivesSpace 1.0.9 or less to 1.1.0, including the option for an external Solr instance. +--- + +Additional upgrade considerations specific to this release. Refer to the [upgrade documentation](/administration/upgrading) for the standard instructions that apply in all cases. + +## External Solr + +--- + +In ArchivesSpace 1.0.9 the default ports configuration was: + +```ruby +AppConfig[:backend_url] = "http://localhost:8089" +AppConfig[:frontend_url] = "http://localhost:8080" +AppConfig[:solr_url] = "http://localhost:8090" +AppConfig[:public_url] = "http://localhost:8081" +``` + +With the introduction of the [optional external Solr instance](/provisioning/solr) functionality this has been updated to: + +```ruby +AppConfig[:backend_url] = "http://localhost:8089" +AppConfig[:frontend_url] = "http://localhost:8080" +AppConfig[:solr_url] = "http://localhost:8090" +AppConfig[:indexer_url] = "http://localhost:8091" # NEW TO 1.1.0 +AppConfig[:public_url] = "http://localhost:8081" +``` + +In most cases the default value for `indexer_url` will blend in seamlessly without you needing to take any action. However, if you modified the original values in your `config.rb` file you may need to update it. 
Examples:
+
+**You use a different ports sequence**
+
+```ruby
+AppConfig[:indexer_url] = "http://localhost:9091"
+```
+
+**You run multiple ArchivesSpace instances on a single host**
+
+Under this deployment scenario you would have changed port numbers for some (or all) instances in each `config.rb` file, so set the `indexer_url` for each instance as described above.
+
+**You include hostnames**
+
+```ruby
+AppConfig[:indexer_url] = "http://yourhostname:8091"
+```
+
+## Clustering
+
+---
+
+In a clustered configuration you may need to edit `instance_[server hostname].rb` files:
+
+```ruby
+{
+  ...
+  :indexer_url => "http://[localhost|yourhostname]:8091",
+}
+```
+
+---
diff --git a/src/content/docs/fr/administration/upgrading_1_1_1.md b/src/content/docs/fr/administration/upgrading_1_1_1.md
new file mode 100644
index 0000000..1df7953
--- /dev/null
+++ b/src/content/docs/fr/administration/upgrading_1_1_1.md
@@ -0,0 +1,58 @@
+---
+title: Upgrading to 1.1.1
+description: Instructions on how to resequence archival object and digital object components within the resource tree and details on a plugin to make PDFs available in the public interface.
+---
+
+Additional upgrade considerations specific to this release. Refer to the [upgrade documentation](/administration/upgrading) for the standard instructions that apply in all cases.
+
+## Resequencing of Archival Object & Digital Object Component trees
+
+---
+
+There have been some scenarios in which archival objects and digital object components lose
+some of the information used to order their hierarchy. This can result in issues in creation,
+editing, or moving items in the tree, since there are database constraints to ensure uniqueness
+of certain metadata elements.
+
+In order to ensure data integrity, there is now a method to resequence the trees. This will
+not reorder or edit the elements, but simply rebuild all the technical metadata used to establish
+the ordering.
+
+To run the resequencing process, edit the `config/config.rb` file to include this line:
+
+```ruby
+AppConfig[:resequence_on_startup] = true
+```
+
+and restart ArchivesSpace. This will trigger a rebuilding process after the application has
+started. It's advised to let this rebuild process run its course prior to editing records.
+The duration depends on the size of your database and can range from seconds ( for databases with
+few Archival and Digital Objects ) to hours ( for databases with hundreds of thousands of records ).
+Check your log file to see how the process is going. When it has finished, you should see the application
+return to normal operation, generally with only indexer updates being recorded in the log file.
+
+After you've started ArchivesSpace, be sure to change the config.rb file to set `AppConfig[:resequence_on_startup]`
+back to `false`, since you will not need to run this process on every restart.
+
+## Export PDFs in the Public Interface
+
+---
+
+A common request has been to have a PDF version of the EAD exported in the public application.
+This has been a bit problematic, since EAD export has a rather large resource hit on the
+database, which is only increased by the added process of PDF creation. We are currently
+redesigning part of the ArchivesSpace backend to make PDF creation more user-friendly by
+establishing a queue system for exports.
+
+In the meantime, Mark Cooper at Lyrasis has made a [Public Metadata Formats plugin](https://github.com/archivesspace-deprecated/aspace-public-formats)
+that exposes certain metadata formats and PDFs in the public UI. This plugin has been included
+in this release, but you will need to configure it to specify which formats you would like
+to have exposed. Please read the plugin documentation on how to configure this.
+
+PLEASE NOTE:
+Exporting large EAD resources with this plugin will most likely cause some problems.
Long requests
+will time out, since the server does not want to waste resources on long-running processes.
+In addition, a large number of requests for PDFs can cause an increased load on the server.
+Please be aware of these plugin issues and limitations before enabling it.
+
+---
diff --git a/src/content/docs/fr/administration/upgrading_1_5_0.md b/src/content/docs/fr/administration/upgrading_1_5_0.md
new file mode 100644
index 0000000..fb5662a
--- /dev/null
+++ b/src/content/docs/fr/administration/upgrading_1_5_0.md
@@ -0,0 +1,147 @@
+---
+title: Upgrading to 1.5.0
+description: Upgrade instructions for upgrading from ArchivesSpace 1.4.2 or lower to 1.5.0, including details on the newest container management feature.
+---
+
+Additional upgrade considerations specific to this release, which also apply to upgrading from 1.4.2 or lower to any version through 2.0.1. Refer to the [upgrade documentation](/administration/upgrading) for the standard instructions that apply in all cases.
+
+## General overview
+
+The upgrade process to the new data model in 1.5.0 requires considerable data transformation and it is important for users to review this document to understand the implications and possible side-effects.
+
+A quick overview of the steps:
+
+1. Review this document and understand how the upgrade will impact your data, paying particular attention to the [Preparation section](#preparation).
+2. [Backup your database](/administration/backup).
+3. No, really, [backup your database](/administration/backup).
+4. It is suggested that [users start with a new Solr index](/administration/indexes). To do this, delete the `data/solr_index/index` directory and all files in the `data/indexer_state` directory. The embedded version of Solr has been upgraded, which should result in a much more compact index size.
+5. Follow the standard [upgrading instructions](/administration/upgrading).
Important to note: The `setup-database.sh|bat` script will modify your database schema, but it will not move the data. If you are currently using the container management plugin you will need to remove it from the list of plugins in your config file prior to starting ArchivesSpace.
+6. Start ArchivesSpace. When 1.5.0 starts for the first time, a conversion process will kick off and move the data into the new table structure. **During this time, the application will be unavailable until it completes**. Duration depends on the size of your data and server resources, ranging from a few minutes for very small databases to several hours for very large ones.
+7. When the conversion is done, the web application will start and the indexer will rebuild your index. Performance might be slower while the indexer runs, depending on your server environment and available resources.
+8. Review the [output of the conversion process](#conversion) following the instructions below. How long it takes for the report to load will depend on the number of entries included in it.
+
+## Preparing for and Converting to the New Container Management Functionality
+
+With version 1.5.0, ArchivesSpace is adopting a new data model that will enable more capable and efficient management of the containers in which you store your archival materials. To take advantage of this improved functionality:
+
+- Repositories already using ArchivesSpace as a production application will need to upgrade their ArchivesSpace applications to version 1.5.0. (This upgrade / conversion must be done to take advantage of any other new features / bug fixes in ArchivesSpace 1.5.0 or later versions.)
+- Repositories not yet using ArchivesSpace in production but needing to migrate data from the Archivists’ Toolkit or Archon will need to migrate their data to version 1.4.2 of ArchivesSpace or earlier and then upgrade that version to version 1.5.0. (This can be done when your repository is ready to migrate to ArchivesSpace.)
+- Repositories not yet using ArchivesSpace in production and not needing to migrate data from the Archivists’ Toolkit or Archon can start using ArchivesSpace 1.5.0 without needing to upgrade. (People in this situation do not need to read any further.)
+
+Converting the container data model in version 1.4.2 and earlier versions of ArchivesSpace to the 1.5.0 version has some complexity and may not accommodate all the various ways in which container information has been recorded by diverse repositories. As a consequence, upgrading from a pre-1.5.0 version of ArchivesSpace requires planning for the upgrade, reviewing the results, and, possibly, remediating data either prior to or after the final conversion process. Because of all the variations in which container information can be recorded, it is impossible to know all the ways the data of repositories will be impacted. For this reason, **all repositories upgrading their ArchivesSpace to version 1.5.0 should do so with a backup of their production ArchivesSpace instance and in a test environment.** A conversion may only be undone by reverting to the source database.
+
+## Frequently Asked Questions
+
+_How will my data be converted to the new model?_
+
+When your installation is upgraded to 1.5.0, the conversion will happen as part of the upgrade process.
+
+_Can I continue to use the current model for containers and not convert to the new model?_
+
+Because it is such a substantial improvement (see the [new features list](#new-features-in-150) below), the new model is required for everyone using ArchivesSpace 1.5.0 and higher. The only way to continue using the current model is to never upgrade beyond 1.4.2.
+
+_What if I’m already using the container management plugin made available to the community by Yale University?_
+
+Conversion of data created using the Yale container management plugin, or a local adaptation of the plugin, will also happen as part of the process of upgrading to 1.5.0.
Some steps will be skipped when they are not needed. At the end of the process, the new container data model will be integrated into your ArchivesSpace and will not need to be loaded or maintained as a plugin.
+
+Those currently running the container management plugin will need to remove it from the list of plugins in your config file prior to starting the conversion, or a validation name error will occur.
+
+_I haven’t moved from Archivists’ Toolkit or Archon yet and am planning to use the associated migration tool. Can I migrate directly to 1.5.0?_
+
+No, you must migrate to 1.4.2 or an earlier version and then upgrade your installation to 1.5.0 according to the instructions provided here.
+
+_What changes are being made to the previous model for containers?_
+
+The biggest change is the new concept of top containers. A top container is the highest level container in which a particular instance is stored. Top containers are in some ways analogous to the current Container 1, but broken out from the entire container record (child and grandchild container records). As such, top containers enable more efficient recording and updating of the highest level containers in your collection.
+
+_How does ArchivesSpace determine what is a top container?_
+
+During the conversion, ArchivesSpace will find all the Container 1s in your current ArchivesSpace database. It will then evaluate them as follows:
+
+- If containers have barcodes, one top container is created for each unique Container 1 barcode.
+- If containers do not have barcodes, one top container is created for each unique combination of container 1 indicator and container type 1 within a resource or accession.
+- Once a top container is created, additional instance records for the same container within an accession or resource will be linked to that top container record.
+
+## Preparation
+
+_What can I do to prepare my ArchivesSpace data for a smoother conversion to top containers?_
+
+- If your Container 1s have unique barcodes, you do not need to do anything except verify that your data is complete and accurate. You should run a preliminary conversion as described in the Conversion section and resolve any errors.
+- If your Container 1s do not have barcodes, but have a nonduplicative container identifier sequence within each accession or resource (e.g. Box 1, Box 2, Box 3), or the identifiers are only reused within an accession or resource for different types of containers (for example, you have a Box 1 through 10 and an Oversize Box 1 through 3), you do not need to do anything except verify that your data is complete and accurate. You should run a preliminary conversion as described in the Conversion section and resolve any errors.
+- If your Container 1s do not have barcodes and you have parallel numbering sequences, where the same indicators and types are used to refer to different containers within the same accession or resource (for example, you have a Box 1 in series 1 and a different Box 1 in series 5), you will need to find a way to uniquely identify these containers. One option is to run the [barcoder plugin](https://github.com/archivesspace-plugins/barcoder) for each resource to which this applies. The barcoder plugin creates barcodes that combine the ID of the highest-level archival object ancestor with the container 1 type and indicator. (The barcoder plugin is designed to run against one resource at a time, instead of against all resources, because not all resources in a repository may match this condition.) Once you’ve differentiated your containers with parallel numbering sequences, you should run a preliminary conversion as described in the Conversion section and resolve any errors.
+
+You do not need to make any changes to Container 2 fields or Container 3 fields. 
Data in these fields will be converted to the new Child and Grandchild container fields, which map directly to them.
+
+If you use the current Container Extent fields, these will no longer be available in 1.5.0. Any data in these fields will be migrated to a new Extent sub-record during the conversion. After the conversion is complete, you can evaluate whether this data should remain in an extent record or whether it belongs in a container profile or other fields, and move it accordingly.
+
+_I have EADs I still need to import into ArchivesSpace. How can I get them ready for this new model?_
+
+If you have a box and folder associated with a component (or any other hierarchical relationship of containers), you will need to add identifiers to the container elements so that the EAD importer knows which is the top container. If you previously used Archivists' Toolkit to create EAD, your containers probably already have container identifiers. If your container elements do not have identifiers already, Yale University has made available an [XSLT transformation file](https://github.com/YaleArchivesSpace/xslt-files/blob/master/EAD_add_IDs_to_containers.xsl) to add them. You will need to run it before importing the EAD file into ArchivesSpace.
+
+## Conversion
+
+When upgrading from 1.4.2 (and earlier versions) to 1.5.0, the container conversion will happen as part of the upgrade process. You will be able to follow its progress in the log. Instructions for upgrading from a previous version of ArchivesSpace are available in the [upgrade documentation](/administration/upgrading).
+
+Because this is a major change in the data model for this portion of the application, running at least one test conversion is very strongly recommended. Follow these steps to run the upgrade/conversion process:
+
+- Create a backup of your ArchivesSpace instance to use for testing. 
**IT IS ESSENTIAL THAT YOU NOT RUN THIS ON A PRODUCTION INSTANCE, AS THE CONVERSION CHANGES YOUR DATA AND THE CHANGES CANNOT BE UNDONE EXCEPT BY REVERTING TO A BACKUP VERSION OF YOUR DATA PRIOR TO RUNNING THE CONVERSION.**
+- Follow the upgrade instructions to unpack a fresh copy of the v1.5.0 release made available for testing, copy your configuration and data files, and transfer your locales.
+- **It is recommended that you delete your Solr index files to start with a fresh index.** We are upgrading the version of Solr that ships with the application, and the upgrade will require a total reindex of your ArchivesSpace data. To do this, delete the `data/solr_index/index` directory and the files in `data/indexer_state`.
+- Follow the upgrade instructions to run the database migrations. As part of this step, your container data will be converted to the new data model. You can follow along in the log. Windows users can open the `archivesspace.out` file in a tool like Notepad++. Mac users can run `tail -f logs/archivesspace.out` to get a live update from the log.
+- When the test conversion has been completed, the log will indicate "Completed: existing containers have been migrated to the new container model."
+
+![Image of Conversion Log](../../../../images/ConversionLog.png)
+
+- Open ArchivesSpace via your browser and log in. Retrieve the container conversion error report from the Background Jobs area:
+- Select Background Jobs from the Settings menu.
+
+![Image of Background Jobs](../../../../images/BackgroundJobs.png)
+
+- The first item listed under Archived Jobs after completing the upgrade should be container_conversion_job. Click View.
+
+![Image of Background Jobs List](../../../../images/BackgroundJobsList.png)
+
+- Under Files, click File to download a CSV file with the errors and a brief explanation. 
+
+![Image of Files](../../../../images/Files.png)
+
+![Image of Error Report](../../../../images/ErrorReport.png)
+
+- Go back to your source data and correct any errors that you can before doing another test conversion.
+- When the error report shows no errors, or when you are satisfied with the remaining errors, your production instance is ready to be upgraded.
+- When the final upgrade/conversion is complete, you can move ArchivesSpace version 1.5.0 into production.
+
+_What are some common errors or anomalies that will be flagged in the conversion?_
+
+- A container with a barcode has different indicators or types in different records.
+- A container with a particular type and indicator sometimes has a barcode and sometimes doesn’t.
+- A container is missing a type or indicator.
+- Container levels are skipped (for example, there is a Container 1 and a Container 3, but no Container 2).
+- A container has multiple locations.
+
+The conversion process can resolve some of these errors for you by supplying or deleting values as it deems appropriate, but for the most control over the process you will most likely want to resolve such issues yourself in your ArchivesSpace database before converting to the new container model.
+
+_Are there any known conversion issues?_
+
+Due to a change in the ArchivesSpace EAD importer in 2015, some EADs with hierarchical containers not designated by a @parent attribute were turned into multiple instance records. This has since been corrected in the application, and a plugin (now available as the [Instance Joiner Plugin](https://github.com/archivesspace-plugins/instance_joiner)) will enable you to turn these back into single instances so that subcontainers are not mistakenly turned into top containers. 
+ +## New features in 1.5.0 + +**Top containers replace Container 1s.** Unlike Container 1s in the current version of ArchivesSpace, top containers in the upcoming version can be defined once and linked many times to various archival objects, resources, and accessions. + +**The ability to create container profiles and associate them with top containers.** Optional container profiles allow you to track information about the containers themselves, including dimensions. + +**Extent calculator.** In conjunction with container profiles, the new extent calculator allows you to easily see extents for accessions, resources, or resource components. Optionally, you can use the calculator to generate extent records for an accession, resource, or resource component. + +**Bulk operations for containers.** The Manage Top Containers area provides more efficient ways to work with multiple containers, including the ability to add or edit barcodes, change locations, and delete top containers in bulk. + +**The ability to "share" boxes across collections in a meaningful way.** You can define top containers separately from individual accessions and resources and access them from multiple accession and resource records. For example, this might be helpful for recording information about an oversize box that contains items from many collections. + +**The ability to store data that will help you synchronize between ArchivesSpace and item records in your ILS.** If your institution creates item records in its ILS for containers, you can now record that information within ArchivesSpace as well. + +**The ability to store data about the restriction status of material associated with a container.** You can now see at a glance whether any portion of the contents of a container is restricted. + +**Machine-actionable restrictions.** You will now have the ability to associate begin and end dates with "conditions governing access" and "conditions governing use" Notes. 
You'll also be able to associate a local restriction type for non-time-bound restrictions. This makes it possible to better manage and re-describe expiring restrictions.
+
+For more information on using the new features, consult the user manual, particularly the new section titled Managing Containers (available late April 2016).
diff --git a/src/content/docs/fr/administration/upgrading_2_1_0.md b/src/content/docs/fr/administration/upgrading_2_1_0.md
new file mode 100644
index 0000000..05b8e8e
--- /dev/null
+++ b/src/content/docs/fr/administration/upgrading_2_1_0.md
@@ -0,0 +1,30 @@
+---
+title: Upgrading to 2.1.0
+description: Instructions on upgrading to ArchivesSpace 2.1.0 if coming from 1.4.2 or below, Archivists' Toolkit or Archon, or if using an external Solr server, in addition to notes on rights statement data migration.
+---
+
+This page covers additional upgrade considerations specific to this release. Refer to the [upgrade documentation](/administration/upgrading) for the standard instructions that apply in all cases.
+
+:::note
+These considerations also apply when upgrading to any version past 2.1.0 from a version prior to 2.1.0.
+:::
+
+## For those upgrading from 1.4.2 and lower
+
+Following the merge of the Container Management Plugin in 1.5.0, ArchivesSpace still retained the old container model and had a number of dependencies on it. This imposed unnecessary complexity and some performance degradation on the system.
+
+In this release all references to the old container model have been removed and the parts of the application that were dependent on it (for example, Imports and Exports) have been refactored to use the new container model.
+
+A consequence of this change is that if you are upgrading from ArchivesSpace version 1.4.2 or lower, you will need to first upgrade to any version between 1.5.0 and 2.0.1 to run the container conversion. You will then be able to upgrade to 2.1.0. 
If you are already using any version of ArchivesSpace between 1.5.0 and 2.0.1, you will be able to upgrade directly to 2.1.0.
+
+## For those needing to migrate data from Archivists' Toolkit or Archon using the migration tools
+
+The migration tools are currently supported through version 1.4.2 only. If you want to migrate data to ArchivesSpace using one of these tools, you must migrate it to 1.4.2. From there you can follow the instructions for those upgrading from 1.4.2 and lower.
+
+## Data migrations in this release
+
+The rights statements data model has changed in 2.1.0. If you currently use rights statements, your data will be converted to the new model during the setup-database step of the upgrade process. We strongly urge you to back up your database and run at least one test upgrade before putting 2.1.0 into production.
+
+## For those using an external Solr server
+
+The index schema has changed with 2.1.0. If you are using an external Solr server, you will need to update the [schema.xml](https://github.com/archivesspace/archivesspace/blob/master/solr/schema.xml) with the newer version. If you are using the default Solr index that ships with ArchivesSpace, no action is needed.
diff --git a/src/content/docs/fr/administration/windows.md b/src/content/docs/fr/administration/windows.md
new file mode 100644
index 0000000..a34b237
--- /dev/null
+++ b/src/content/docs/fr/administration/windows.md
@@ -0,0 +1,60 @@
+---
+title: Running as a Windows service
+description: Instructions on how to set up ArchivesSpace as a Windows service.
+---
+
+Running ArchivesSpace as a Windows service requires some additional configuration.
+
+You can use Apache [procrun](http://commons.apache.org/proper/commons-daemon/procrun.html) to configure ArchivesSpace to run as a Windows service. We have provided a service.bat script that will attempt to configure procrun for you (under `launcher\service.bat`). 
+
+To run this script, first you need to [download procrun](http://www.apache.org/dist/commons/daemon/binaries/windows/).
+Extract the files and copy `prunsrv.exe` and `prunmgr.exe` to your ArchivesSpace directory.
+
+To find the path to Java, go to "Start" > "Control Panel" > "Java" and select the "Java" tab. You'll see the path there. It will look something like `C:\Program Files (x86)\Java`.
+
+You also need to be sure that Java is in your system path, and to create `JAVA_HOME` as a global environment variable.
+To add Java to your path, edit your `%PATH%` environment variable to include the directory of your Java executable (it will be something like `C:\Program Files (x86)\Java`). To add `JAVA_HOME`, add a new system variable and set it to the directory where Java was installed (something like `C:\Program Files (x86)\Java`).
+
+Environment variables can be found by going to "Start" > "Control Panel" and searching for "environment". Click "edit the system environment variables". In the section "System Variables", find the `PATH` environment variable and select it. Click Edit. If the `PATH` environment variable does not exist, click New. In the Edit System Variable (or New System Variable) window, specify the value of the `PATH` environment variable. Click OK. Close all remaining windows by clicking OK. Do the same for `JAVA_HOME`.
+
+Before setting up the ArchivesSpace service, you should also [configure ArchivesSpace to run against MySQL](/provisioning/mysql).
+Be sure that the MySQL connector jar file is in the lib directory, in order for
+the service setup script to add it to the application's classpath.
+
+Lastly, for the service to shut down cleanly, uncomment and change these lines in
+`config/config.rb`:
+
+```ruby
+AppConfig[:use_jetty_shutdown_handler] = true
+AppConfig[:jetty_shutdown_path] = "/xkcd"
+```
+
+This enables a shutdown hook for Jetty to respond to when the shutdown action
+is taken. 
+
+You can now execute the batch script from the command line in your ArchivesSpace
+root directory with `launcher\service.bat`. This will configure the service and
+provide two executables: `ArchivesSpaceService.exe` (the service) and
+`ArchivesSpaceServicew.exe` (a GUI monitor).
+
+There are several options to launch the service. The easiest is to open the GUI
+monitor and click "Launch".
+
+Alternatively, you can start the GUI monitor and minimize it in your
+system tray with:

+```shell
+ArchivesSpaceServicew.exe //MS//
+```
+
+To execute the service from the command line, you can invoke:
+
+```shell
+ArchivesSpaceService.exe //ES//
+```
+
+Log output will be placed in your ArchivesSpace log directory.
+
+Please see the [procrun
+documentation](http://commons.apache.org/proper/commons-daemon/procrun.html)
+for more information.
diff --git a/src/content/docs/fr/api/index.md b/src/content/docs/fr/api/index.md
new file mode 100644
index 0000000..3f79dc2
--- /dev/null
+++ b/src/content/docs/fr/api/index.md
@@ -0,0 +1,486 @@
+---
+title: Working with the API
+description: General information about working with the API, including authentication, get, and post requests with examples.
+---
+
+:::tip
+This documentation provides general information on working with the API. For detailed documentation of specific endpoints, see the [API reference](http://archivesspace.github.io/archivesspace/api/), which is maintained separately.
+:::
+
+## Authentication
+
+Most actions against the backend require you to be logged in as a user
+with the appropriate permissions. By sending a request like:
+
+    POST /users/admin/login?password=login
+
+your authentication request will be validated, and a session token
+will be returned in the JSON response for your request. To remain
+authenticated, provide this token with subsequent requests in the
+`X-ArchivesSpace-Session` header. 
For example:
+
+    X-ArchivesSpace-Session: 8e921ac9bbe9a4a947eee8a7c5fa8b4c81c51729935860c1adfed60a5e4202cb
+
+Since not all backend/API endpoints require authentication, it is best to restrict access to port 8089 to only IP addresses you trust. Your firewall should be used to specify a range of IP addresses that are allowed to call your ArchivesSpace API endpoint. This is commonly called whitelisting or allowlisting.
+
+### Example requests using curl
+
+Send a request to authenticate:
+
+```shell
+curl -s -F password="admin" "http://localhost:8089/users/admin/login"
+```
+
+This will return a JSON response that includes something like the following:
+
+<!-- prettier-ignore -->
+```json
+{
+  "session":"9528190655b979f00817a5d38f9daf07d1686fed99a1d53dd2c9ff2d852a0c6e",
+  ....
+}
+```
+
+It’s a good idea to save the session key as an environment variable to use for later requests:
+
+```shell
+#Mac/Unix terminal
+export SESSION="9528190655b979f00817a5d38f9daf07d1686fed99a1d53dd2c9ff2d852a0c6e"
+
+#Windows Command Prompt
+set SESSION="9528190655b979f00817a5d38f9daf07d1686fed99a1d53dd2c9ff2d852a0c6e"
+
+#Windows PowerShell
+$env:SESSION="9528190655b979f00817a5d38f9daf07d1686fed99a1d53dd2c9ff2d852a0c6e"
+```
+
+Now you can make requests like this:
+
+```shell
+curl -H "X-ArchivesSpace-Session: $SESSION" "http://localhost:8089/repositories/2/resources/1"
+```
+
+## CRUD
+
+The ArchivesSpace API provides CRUD-style interactions for a number of
+different "top-level" record types. Working with records follows a
+fairly standard pattern:
+
+    # Get a paginated list of accessions from repository '123'
+    GET /repositories/123/accessions?page=1
+
+    # Create a new accession, returning the ID of the new record
+    POST /repositories/123/accessions
+    {... 
a JSON document satisfying JSONModel(:accession) here ...}
+
+    # Get a single accession (returned as a JSONModel(:accession) instance) using the ID returned by the previous request
+    GET /repositories/123/accessions/456
+
+    # Update an existing accession
+    POST /repositories/123/accessions/456
+    {... a JSON document satisfying JSONModel(:accession) here ...}
+
+## Performing API requests
+
+### GET requests
+
+#### Resolving associated records
+
+Many records contain references (refs) to other records. The `resolve` parameter tells ArchivesSpace to attach the full object to these refs; it is passed in as an
+array of keys to "prefetch" in the returned JSON. The object is included in the ref under a `_resolved` key.
+
+For example, to find an archival object by a ref_id and return the found archival object, you can attach
+`resolve[]: "archival_objects"` within your request.
+
+##### Shell Example
+
+> ```shell
+> curl -s -F password="admin" "http://localhost:8089/users/admin/login"
+> # Replace "admin" with your password and "http://localhost:8089/users/admin/login" with your ASpace API URL
+> # followed by "/users/{your_username}/login"
+>
+> set SESSION="session_id"
+> # If using a unix-like shell, replace set with export
+>
+> curl -H "X-ArchivesSpace-Session: $SESSION" \
+> "http://localhost:8089/repositories/:repo_id:/find_by_id/archival_objects?ref_id[]=hello_im_a_ref_id&resolve[]=archival_objects"
+> # Replace "http://localhost:8089" with your ASpace API URL, :repo_id: with the repository ID,
+> # "hello_im_a_ref_id" with the ref ID you are searching for, and only add
+> # "resolve[]=archival_objects" if you want the JSON for the returned record - otherwise, it will return the
+> # record URI only
+> ```
+
+##### Python Example
+
+> ```python
+> from asnake.client import ASnakeClient # import the ArchivesSnake client
+>
+> client = ASnakeClient(baseurl="http://localhost:8089", username="admin", password="admin")
+> # Replace "http://localhost:8089" with your ArchivesSpace API URL and "admin" for your 
username and password
+>
+> client.authorize() # authorizes the client
+>
+> find_ao_refid = client.get("repositories/:repo_id:/find_by_id/archival_objects",
+>                            params={"ref_id[]": "hello_im_a_ref_id",
+>                                    "resolve[]": "archival_objects"})
+> # Replace :repo_id: with the repository ID, "hello_im_a_ref_id" with the ref ID you are searching for, and only add
+> # "resolve[]": "archival_objects" if you want the JSON for the returned record - otherwise, it will return the
+> # record URI only
+>
+> print(find_ao_refid.json())
+> # Output (dict): {'archival_objects': [{'ref': '/repositories/2/archival_objects/708425', '_resolved':...}]}
+> ```
+
+#### Requests for paginated results
+
+Endpoints that represent groups of objects, rather than single objects, tend to be paginated. Paginated endpoints are called out in the documentation as special, with some version of the following content appearing:
+
+    This endpoint is paginated. :page, :id_set, or :all_ids is required
+
+    Integer page – The page set to be returned
+    Integer page_size – The size of the set to be returned (Optional. 
default set in AppConfig)
+    Comma separated list id_set – A list of ids to request resolved objects (Must be smaller than default page_size)
+    Boolean all_ids – Return a list of all object ids
+
+These endpoints support some or all of the following:
+
+    paged access to objects (via :page)
+    listing all matching ids (via :all_ids)
+    fetching specific known objects via their database ids (via :id_set)
+
+##### Shell Example
+
+> ```shell
+> curl -s -F password="admin" "http://localhost:8089/users/admin/login"
+> # Replace "admin" with your password and "http://localhost:8089/users/admin/login" with your ASpace API URL
+> # followed by "/users/{your_username}/login"
+>
+> set SESSION="session_id"
+> # If using a unix-like shell, replace set with export
+>
+> # For all archival objects, use all_ids
+> curl -H "X-ArchivesSpace-Session: $SESSION" \
+> "http://localhost:8089/repositories/2/archival_objects?all_ids=true"
+>
+> # For a set of archival objects, use id_set
+> curl -H "X-ArchivesSpace-Session: $SESSION" \
+> "http://localhost:8089/repositories/2/archival_objects?id_set=707458&id_set=707460&id_set=707461"
+>
+> # For a page of archival objects, use page and page_size
+> curl -H "X-ArchivesSpace-Session: $SESSION" \
+> "http://localhost:8089/repositories/2/archival_objects?page=1&page_size=10"
+> ```
+
+#### Working with long result sets
+
+When working with paginated results using the page and page_size parameters, many results can be returned, and managing
+those results can be difficult. The Python example below demonstrates how to request a page of
+archival objects and iterate through the results. 
+
+##### Python Example
+
+> ```python
+> from asnake.client import ASnakeClient # import the ArchivesSnake client
+>
+> client = ASnakeClient(baseurl="http://localhost:8089", username="admin", password="admin")
+> # Replace http://localhost:8089 with your ArchivesSpace API URL and admin for your username and password
+>
+> client.authorize() # authorizes the client
+>
+> # To get a page of archival objects with a set page size, use "page" and "page_size" parameters
+> get_repo_aos_pages = client.get("repositories/2/archival_objects", params={"page": 1, "page_size": 10})
+> # Replace 2 with your repository ID. Find this in the URI of your archival object on the bottom right of the
+> # Basic Information section in the staff interface
+>
+> print(get_repo_aos_pages.json())
+> # Output (dictionary): {'first_page': 1, 'last_page': 26949, 'this_page': 1, 'total': 269488,
+> # 'results': [{'lock_version': 1, 'position': 0,...]...}
+>
+> results = get_repo_aos_pages.json()["results"]
+> result_count = len(results)  # the number of results returned on this page
+> for result in results:
+>     # Each result is the full JSONModel object for one archival object
+>     print(result["uri"], result.get("title"))
+> ```
+
+#### Search requests
+
+A number of routes in the ArchivesSpace API are designed to search for content across all or part of the records in the
+application. These routes make use of Solr, a component bundled with ArchivesSpace and used to provide full text search
+over records.
+
+The search routes present in the application as of this time are:
+
+- Search this archive
+- Search across repositories
+- Search this repository
+- Search across subjects
+- Search for top containers
+- Search across location profiles
+
+Search routes take quite a few different parameters, most of which correspond directly to Solr query parameters. The
+most important parameter to understand is `q`, which is the query sent to Solr. This query is made in Lucene query
+syntax. 
The relevant docs are in the Solr Ref Guide's [The Standard Query Parser](https://solr.apache.org/guide/6_6/the-standard-query-parser.html#the-standard-query-parser) webpage.
+
+To limit a search to records of a particular type or set of types, you can use the `type` parameter. This is only
+relevant for search endpoints that aren't limited to specific types. Note that `type` is expected to be a list of types,
+even if there is only one type you care about.
+
+##### Notes on search routes and results
+
+ArchivesSpace represents records as JSONModel objects - this is what you get from and send to the system.
+
+Solr takes these records and stores documents **based on** these JSONModel objects in a searchable index.
+
+Search routes query these documents, **not** the records themselves as stored in the database and represented by JSONModel.
+
+JSONModel objects and Solr documents are similar in some ways:
+
+- Both Solr documents and JSONModel objects are expressed in JSON
+- In general, documents will always contain some subset of the JSONModel object they represent
+
+But they also differ in quite a few important ways:
+
+- Solr documents don't necessarily have all fields from a JSONModel object
+- Solr documents do not automatically contain nested JSONModel objects
+- Solr documents can have fields defined that are arbitrary "search representations" of fields in associated records,
+  or combinations of fields in a record
+- Solr documents don't have a `jsonmodel_type` field - the `jsonmodel_type` of the record is stored as `primary_type` in Solr
+
+How do I get the actual JSONModel from a search document?
+
+In ArchivesSpace, Solr documents all have a field `json`, which contains the JSONModel object the document represents as
+a string. You can use a JSON library to parse this string from the field, for example the `json` library in Python. 
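
For example, here is a small Python sketch of pulling the JSONModel object out of a search document's `json` field. The response body shown is a truncated, hypothetical example of the real structure:

```python
import json

def parse_search_results(search_response):
    # Each Solr document stores its full record as a JSON string in the
    # "json" field; json.loads turns that string back into a dict
    return [json.loads(doc["json"]) for doc in search_response["results"]]

# A truncated, hypothetical search response body:
response = {
    "results": [
        {"primary_type": "resource",
         "json": '{"jsonmodel_type": "resource", "title": "Example papers"}'}
    ]
}

records = parse_search_results(response)
print(records[0]["title"])  # Example papers
```

The same pattern applies to any page of search results returned by the search routes above.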
+ +##### Shell Example + +> ```shell +> +> # auto-generated example +> curl -H "X-ArchivesSpace-Session: $SESSION" \ +> "http://localhost:8089/search/repositories?q=&aq=%7B%22jsonmodel_type%22%3D%3E%22advanced_query%22%2C+%22query%22%3D%3E%7B%22jsonmodel_type%22%3D%3E%22boolean_query%22%2C+%22op%22%3D%3E%22AND%22%2C+%22subqueries%22%3D%3E%5B%7B%22jsonmodel_type%22%3D%3E%22date_field_query%22%2C+%22negated%22%3D%3Efalse%2C+%22comparator%22%3D%3E%22empty%22%2C+%22field%22%3D%3E%22QSUC205%22%2C+%22value%22%3D%3E%222018-03-26%22%7D%5D%7D%7D&type%5B%5D=&sort=&facet%5B%5D=&facet_mincount=1&filter=%7B%22jsonmodel_type%22%3D%3E%22advanced_query%22%2C+%22query%22%3D%3E%7B%22jsonmodel_type%22%3D%3E%22boolean_query%22%2C+%22op%22%3D%3E%22AND%22%2C+%22subqueries%22%3D%3E%5B%7B%22jsonmodel_type%22%3D%3E%22date_field_query%22%2C+%22negated%22%3D%3Efalse%2C+%22comparator%22%3D%3E%22empty%22%2C+%22field%22%3D%3E%22QSUC205%22%2C+%22value%22%3D%3E%222018-03-26%22%7D%5D%7D%7D&filter_query%5B%5D=&exclude%5B%5D=&hl=BooleanParam&root_record=&dt=&fields%5B%5D=" +> +> # auto-generated example +> curl -H 'Content-Type: text/json' -H "X-ArchivesSpace-Session: $SESSION" \ +> "http://localhost:8089/search/repositories" \ +> -d '{ +> "aq": { +> "jsonmodel_type": "advanced_query", +> "query": { +> "jsonmodel_type": "boolean_query", +> "op": "AND", +> "subqueries": [ +> { +> "jsonmodel_type": "date_field_query", +> "negated": false, +> "comparator": "empty", +> "field": "QSUC205", +> "value": "2018-03-26" +> } +> ] +> } +> }, +> "facet_mincount": "1", +> "filter": { +> "jsonmodel_type": "advanced_query", +> "query": { +> "jsonmodel_type": "boolean_query", +> "op": "AND", +> "subqueries": [ +> { +> "jsonmodel_type": "date_field_query", +> "negated": false, +> "comparator": "empty", +> "field": "QSUC205", +> "value": "2018-03-26" +> } +> ] +> } +> }, +> "hl": "BooleanParam" +> }' +> ``` + +### POST requests + +#### Updating existing records + +For updating existing records, it's recommended to 
first do a GET request for the record you want to update. This
+ensures that you are working with the most current data and reduces the chance of inadvertently removing data that
+was there previously but would be lost if it is not included in the subsequent update. After getting the original
+record data, you can update it as needed and then do a POST request with the updated data. Make sure that the updated
+data is JSON formatted and is passed either through the `-d` or `--data` parameter, or the `json` parameter if using
+ArchivesSnake.
+
+##### Shell Example
+
+> ```shell
+> curl -s -F password="admin" "http://localhost:8089/users/admin/login"
+> # Replace "admin" with your password and "http://localhost:8089" with your ASpace API URL followed by
+> # "/users/{your_username}/login"
+>
+> set SESSION="session_id"
+> # If using a unix-like shell, replace set with export
+>
+> curl -H 'Content-Type: text/json' -H "X-ArchivesSpace-Session: $SESSION" \
+> "http://localhost:8089/repositories/:repo_id:/groups/:group_id:" \
+> -d '{"group_code": "test-group_managers",
+> "lock_version": 4,
+> "description": "Test group managers of the Manuscripts repository",
+> "jsonmodel_type": "group",
+> "member_usernames": [
+> "manager", "advance"]}'
+> # Replace http://localhost:8089 with your ArchivesSpace API URL, :repo_id: with the repository ID number,
+> # :group_id: with the group ID number you want to update, and the data found after -d with the data you want
+> # to update the group with. Be sure to include "lock_version" and the most recent number for it. 
You can find the
+> # most recent lock_version by submitting a GET request, like so: curl -H "X-ArchivesSpace-Session: $SESSION" \
+> # "http://localhost:8089/repositories/:repo_id:/groups/:group_id:"
+>
+> # Output:
+> # {"status":"Updated","id":23,"lock_version":5,"stale":null,"uri":"/repositories/2/groups/23","warnings":[]}
+> ```
+
+##### Python Example
+
+> ```python
+> from asnake.client import ASnakeClient # import the ArchivesSnake client
+> client = ASnakeClient(baseurl="http://localhost:8089", username="admin", password="admin")
+> # replace http://localhost:8089 with your ArchivesSpace API URL and admin for your username and password
+>
+> client.authorize() # authorizes the client
+>
+> get_user_group = client.get("repositories/:repo_id:/groups/:group_id:").json()
+> # Retrieve the data from the group you are trying to update. Replace :repo_id: with the repository ID number and
+> # :group_id: with the group ID number you want to update
+>
+> get_user_group["member_usernames"].append("advance")
+> # An example of how to modify a group record. For a list of all the fields you can update,
+> # print(get_user_group). Here we append a new user 'advance' to the list of users associated with this group.
+>
+> update_user_group = get_user_group
+> # Assign the updated get_user_group to update_user_group to make the next step easier to read.
+>
+> update_status = client.post("repositories/:repo_id:/groups/:group_id:", json=update_user_group)
+> # Replace :repo_id: with the repository ID number and :group_id: with the group ID number you want to update
+>
+> print(update_status.json())
+> # Output:
+> # {'status': 'Updated', 'id': 48, 'lock_version': 1, 'stale': None, 'uri': '/repositories/2/groups/48',
+> # 'warnings': []}
+> ```
+
+#### Creating new records
+
+When creating new records, it's recommended to do a GET request on the type of record you want to create. 
This
+record is useful for seeing which fields are included in that specific record type. Not all fields are required:
+for example, the `created` and `modified` fields are not necessary when creating a new record, since those fields are
+handled automatically. Others, such as `title` and `jsonmodel_type`, are required.
+
+After examining an existing record for reference, craft your JSON-formatted data and make a POST request. Make sure
+that the new record is passed through the `-d` or `--data` parameter, or the `json` parameter if using ArchivesSnake.
+
+##### Shell Example
+
+> ```shell
+> # Create a new user group within the SHELL
+> curl -s -F password="admin" "http://localhost:8089/users/admin/login"
+> # Replace "admin" with your password and "http://localhost:8089" with your ASpace API URL followed by
+> # "/users/{your_username}/login"
+>
+> set SESSION="session_id"
+> # If using a unix-like shell, replace set with export
+>
+> curl -H "X-ArchivesSpace-Session: $SESSION" "http://localhost:8089/repositories/:repo_id:/groups/" \
+> -d '{"group_code": "test-group_managers",
+> "description": "Test group managers of the Manuscripts repository",
+> "jsonmodel_type": "group"}'
+> # Replace "http://localhost:8089" with your ASpace API URL, :repo_id: with the repository ID, and
+> # the data after -d with the metadata for the new user group.
+>
+> # Output:
+> # {"status":"Created","id":24,"lock_version":0,"stale":null,"uri":"/repositories/2/groups/24","warnings":[]}
+> ```
+
+##### Python Example
+
+> ```python
+> # Create a new user group using Python and ArchivesSnake
+> from asnake.client import ASnakeClient  # import the ArchivesSnake client
+>
+> client = ASnakeClient(baseurl="http://localhost:8089", username="admin", password="admin")
+> # replace http://localhost:8089 with your ArchivesSpace API URL and admin for your username and password
+>
+> client.authorize()  # authorizes the client
+>
+> new_group = {
+>     "group_code": "test-group_managers",
+>     "description": "Test group managers of the Manuscripts repository",
+>     "jsonmodel_type": "group",
+>     "member_usernames": [
+>         "manager"
+>     ],
+>     "grants_permissions": [
+>         "cancel_job",
+>         "manage_enumeration_record"]
+> }
+> # This is a sample user group that exceeds the minimum requirements. The minimum requirements are:
+> # jsonmodel_type, description, and group_code. grants_permissions is optional; its values can be looked up in
+> # the ASpace database within the permissions table
+>
+> post_user_group = client.post("repositories/:repo_id:/groups", json=new_group)
+> # Replace :repo_id: with the ArchivesSpace repository ID; new_group holds the JSON data for the new user
+> # group
+>
+> print(post_user_group.json())
+> # Output:
+> # {'status': 'Created', 'id': 23, 'lock_version': 0, 'stale': None, 'uri': '/repositories/2/groups/23',
+> # 'warnings': []}
+> ```
+
+### DELETE requests
+
+A DELETE request made through the API permanently deletes the record, just as deleting it within the staff
+interface does. Be careful! Make sure you want to delete the entire record before doing so. If you only want to
+delete parts of a record, for example some notes or other fields, see
+[Updating existing records](#updating-existing-records).
+
+To delete a record, retrieve the record's ArchivesSpace-generated ID and use the `DELETE` command in the shell or
+`client.delete` if using the ArchivesSnake Python library.
+
+##### Shell Example
+
+> ```shell
+> # Delete a user group within the SHELL
+> curl -s -F password="admin" "http://localhost:8089/users/admin/login"
+> # Replace "admin" with your password and "http://localhost:8089" with your ASpace API URL followed by
+> # "/users/{your_username}/login"
+>
+> set SESSION="session_id"
+> # If using a unix-like shell, replace set with export
+>
+> curl -H "X-ArchivesSpace-Session: $SESSION" \
+> -X DELETE "http://localhost:8089/repositories/:repo_id:/groups/:group_id:"
+> # Replace "http://localhost:8089" with your ASpace API URL, :repo_id: with the repository ID, and
+> # :group_id: with the ID of the group you want to delete (usually found in the URL of the user group when
+> # viewing in the staff interface). Deleting is permanent so make sure to test this first!
+>
+> # Output: {"status":"Deleted","id":47}
+> ```
+
+##### Python Example
+
+> ```python
+> # Delete a user group from a repository using Python and ArchivesSnake
+> from asnake.client import ASnakeClient  # import the ArchivesSnake client
+>
+> client = ASnakeClient(baseurl="http://localhost:8089", username="admin", password="admin")
+> # replace http://localhost:8089 with your ArchivesSpace API URL and admin for your username and password
+>
+> client.authorize()  # authorizes the client
+>
+> delete_user_group = client.delete("repositories/:repo_id:/groups/:group_id:")
+> # Replace :repo_id: with the ArchivesSpace repository ID and :group_id: with the ArchivesSpace ID of the
+> # user group you want to delete. Deleting is permanent so make sure to test this first!
+> +> print(delete_user_group.json()) +> # Output: {'status': 'Deleted', 'id': 23} +> ``` diff --git a/src/content/docs/fr/architecture/api.md b/src/content/docs/fr/architecture/api.md new file mode 100644 index 0000000..474cf47 --- /dev/null +++ b/src/content/docs/fr/architecture/api.md @@ -0,0 +1,48 @@ +--- +title: API +description: Instructions for how to authenticate when trying to connect to a backend session, such as through the API, along with examples of common requests for getting and posting data. +--- + +:::note +See the [API section](/api/index) for more detailed documentation. +::: + +## Authentication + +Most actions against the backend require you to be logged in as a user +with the appropriate permissions. By sending a request like: + +``` +POST /users/admin/login?password=login +``` + +your authentication request will be validated, and a session token +will be returned in the JSON response for your request. To remain +authenticated, provide this token with subsequent requests in the +`X-ArchivesSpace-Session` header. For example: + +``` +X-ArchivesSpace-Session: 8e921ac9bbe9a4a947eee8a7c5fa8b4c81c51729935860c1adfed60a5e4202cb +``` + +## CRUD + +The ArchivesSpace API provides CRUD-style interactions for a number of +different "top-level" record types. Working with records follows a +fairly standard pattern: + +``` +# Get a paginated list of accessions from repository '123' +GET /repositories/123/accessions?page=1 + +# Create a new accession, returning the ID of the new record +POST /repositories/123/accessions +{... a JSON document satisfying JSONModel(:accession) here ...} + +# Get a single accession (returned as a JSONModel(:accession) instance) using the ID returned by the previous request +GET /repositories/123/accessions/456 + +# Update an existing accession +POST /repositories/123/accessions/456 +{... 
a JSON document satisfying JSONModel(:accession) here ...}
+```
diff --git a/src/content/docs/fr/architecture/archivesspace_architecture.svg b/src/content/docs/fr/architecture/archivesspace_architecture.svg
new file mode 100644
index 0000000..e7ded40
--- /dev/null
+++ b/src/content/docs/fr/architecture/archivesspace_architecture.svg
@@ -0,0 +1,67 @@
+<svg width="100%" viewBox="0 0 680 560" xmlns="http://www.w3.org/2000/svg" font-family="'Anthropic Sans', -apple-system, BlinkMacSystemFont, 'Segoe UI', sans-serif">
+<defs>
+<marker id="arrow" viewBox="0 0 10 10" refX="8" refY="5" markerWidth="6" markerHeight="6" orient="auto-start-reverse">
+<path d="M2 1L8 5L2 9" fill="none" stroke="context-stroke" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/>
+</marker>
+</defs>
+
+<!-- Users -->
+<rect x="40" y="22" width="160" height="42" rx="8" fill="#085041" stroke="#5DCAA5" stroke-width="0.5"/>
+<text x="120" y="43" text-anchor="middle" dominant-baseline="central" fill="#9FE1CB" font-size="14" font-weight="500">Logged-in users</text>
+<rect x="265" y="22" width="150" height="42" rx="8" fill="#444441" stroke="#B4B2A9" stroke-width="0.5"/>
+<text x="340" y="43" text-anchor="middle" dominant-baseline="central" fill="#D3D1C7" font-size="14" font-weight="500">Internet</text>
+<rect x="480" y="22" width="160" height="42" rx="8" fill="#712B13" stroke="#F0997B" stroke-width="0.5"/>
+<text x="560" y="43" text-anchor="middle" dominant-baseline="central" fill="#F5C4B3" font-size="14" font-weight="500">Anonymous users</text>
+<line x1="200" y1="43" x2="265" y2="43" stroke="#0F6E56" stroke-width="1.5" fill="none" marker-end="url(#arrow)"/>
+<line x1="480" y1="43" x2="415" y2="43" stroke="#993C1D" stroke-width="1.5" fill="none" marker-end="url(#arrow)"/>
+<path d="M310,64 C300,108 105,96 105,138" fill="none" stroke="#0F6E56" stroke-width="1.5" marker-end="url(#arrow)"/>
+<path d="M370,64 C380,108 547,96 547,138" fill="none" stroke="#993C1D" stroke-width="1.5" marker-end="url(#arrow)"/>
+
+<!-- Frontend tier -->
+<rect x="15" y="115" width="650" height="145" rx="12" fill="none" stroke="rgba(222, 220, 209, 0.3)" stroke-width="0.5" stroke-dasharray="6 4"/>
+<rect x="290" y="104" width="100" height="22" rx="11" fill="#0C447C" stroke="#85B7EB" stroke-width="0.5"/>
+<text x="340" y="115" text-anchor="middle" dominant-baseline="central" fill="#B5D4F4" font-size="14" font-weight="500">Frontend</text>
+<rect x="20" y="138" width="170" height="58" rx="8" fill="#0C447C" stroke="#85B7EB" stroke-width="0.5"/>
+<text x="105" y="155" text-anchor="middle" dominant-baseline="central" fill="#B5D4F4" font-size="14" font-weight="500">Staff UI</text>
+<text x="105" y="173" text-anchor="middle" dominant-baseline="central" fill="#85B7EB" font-size="12">JRuby · Rails · jQuery</text>
+<line x1="36" y1="192" x2="174" y2="192" stroke="#0F6E56" stroke-width="2" stroke-linecap="round"/>
+<rect x="248" y="138" width="170" height="58" rx="8" fill="#0C447C" stroke="#85B7EB" stroke-width="0.5"/>
+<text x="333" y="158" text-anchor="middle" dominant-baseline="central" fill="#B5D4F4" font-size="14" font-weight="500">Background jobs</text>
+<text x="333" y="176" text-anchor="middle" dominant-baseline="central" fill="#85B7EB" font-size="12">JRuby · Ruby</text>
+<rect x="462" y="138" width="170" height="58" rx="8" fill="#0C447C" stroke="#85B7EB" stroke-width="0.5"/>
+<text x="547" y="155" text-anchor="middle" dominant-baseline="central" fill="#B5D4F4" font-size="14" font-weight="500">Public UI</text>
+<text x="547" y="173" text-anchor="middle" dominant-baseline="central" fill="#85B7EB" font-size="12">JRuby · Rails · jQuery</text>
+<line x1="478" y1="192" x2="616" y2="192" stroke="#993C1D" stroke-width="2" stroke-linecap="round"/>
+<line x1="190" y1="167" x2="248" y2="167" stroke="#9C9A92" stroke-width="1.5" fill="none" marker-end="url(#arrow)"/>
+<path d="M105,196 C105,258 80,258 80,330" fill="none" stroke="#9C9A92" stroke-width="1.5" marker-end="url(#arrow)"/>
+<path d="M333,196 C333,262 120,262 120,330" fill="none" stroke="#9C9A92" stroke-width="1.5" marker-end="url(#arrow)"/>
+<path d="M547,196 C547,268 160,268 160,330" fill="none" stroke="#9C9A92" stroke-width="1.5" marker-end="url(#arrow)"/>
+
+<!-- Backend tier -->
+<rect x="15" y="310" width="650" height="115" rx="12" fill="none" stroke="rgba(222, 220, 209, 0.3)" stroke-width="0.5" stroke-dasharray="6 4"/>
+<rect x="290" y="299" width="100" height="22" rx="11" fill="#085041" stroke="#5DCAA5" stroke-width="0.5"/>
+<text x="340" y="310" text-anchor="middle" dominant-baseline="central" fill="#9FE1CB" font-size="14" font-weight="500">Backend</text>
+<rect x="50" y="330" width="185" height="68" rx="8" fill="#085041" stroke="#5DCAA5" stroke-width="0.5"/>
+<text x="142" y="352" text-anchor="middle" dominant-baseline="central" fill="#9FE1CB" font-size="14" font-weight="500">ArchivesSpace API</text>
+<text x="142" y="369" text-anchor="middle" dominant-baseline="central" fill="#5DCAA5" font-size="12">JRuby · Sinatra</text>
+<text x="142" y="385" text-anchor="middle" dominant-baseline="central" fill="#5DCAA5" font-size="12">JSONModel</text>
+<rect x="435" y="330" width="195" height="68" rx="8" fill="#085041" stroke="#5DCAA5" stroke-width="0.5"/>
+<text x="532" y="352" text-anchor="middle" dominant-baseline="central" fill="#9FE1CB" font-size="14" font-weight="500">Indexer</text>
+<text x="532" y="369" text-anchor="middle" dominant-baseline="central" fill="#5DCAA5" font-size="12">JRuby · Sinatra</text>
+<text x="532" y="385" text-anchor="middle" dominant-baseline="central" fill="#5DCAA5" font-size="12">JSONModel</text>
+<text x="340" y="346" text-anchor="middle" fill="#C2C0B6" font-size="12">monitors updates</text>
+<line x1="435" y1="359" x2="235" y2="359" stroke="#9C9A92" stroke-width="1.5" fill="none" marker-end="url(#arrow)"/>
+
+<!-- Storage tier -->
+<rect x="15" y="450" width="650" height="95" rx="12" fill="none" stroke="rgba(222, 220, 209, 0.3)" stroke-width="0.5" stroke-dasharray="6 4"/>
+<rect x="290" y="439" width="100" height="22" rx="11" fill="#633806" stroke="#EF9F27" stroke-width="0.5"/>
+<text x="340" y="450" text-anchor="middle" dominant-baseline="central" fill="#FAC775" font-size="14" font-weight="500">Storage</text>
+<rect x="50" y="462" width="185" height="58" rx="8" fill="#633806" stroke="#EF9F27" stroke-width="0.5"/>
+<text x="142" y="482" text-anchor="middle" dominant-baseline="central" fill="#FAC775" font-size="14" font-weight="500">MySQL</text>
+<text x="142" y="500" text-anchor="middle" dominant-baseline="central" fill="#EF9F27" font-size="12">Primary data store</text>
+<rect x="435" y="462" width="195" height="58" rx="8" fill="#633806" stroke="#EF9F27" stroke-width="0.5"/>
+<text x="532" y="482" text-anchor="middle" dominant-baseline="central" fill="#FAC775" font-size="14" font-weight="500">Apache Solr</text>
+<text x="532" y="500" text-anchor="middle" dominant-baseline="central" fill="#EF9F27" font-size="12">Search index · Java</text>
+<line x1="142" y1="398" x2="142" y2="462" stroke="#9C9A92" stroke-width="1.5" marker-end="url(#arrow)"/>
+<line x1="532" y1="398" x2="532" y2="462" stroke="#9C9A92" stroke-width="1.5" marker-end="url(#arrow)"/>
+</svg>
\ No newline at end of file
diff --git a/src/content/docs/fr/architecture/backend.md b/src/content/docs/fr/architecture/backend.md
new file mode 100644
index 0000000..e44a9ad
--- /dev/null
+++ b/src/content/docs/fr/architecture/backend.md
@@ -0,0 +1,422 @@
+---
+title: Backend
+description: Describes the architecture behind the backend of ArchivesSpace, including the main.rb and rest.rb files for initiating ArchivesSpace and defining API mechanisms, controllers,
models, nested records, relationships, agents, validation, optimistic concurrency control, and the permissions model.
+---
+
+The backend is responsible for implementing the ArchivesSpace API, and
+supports the sort of access patterns shown in the previous section.
+We've seen that the backend must support CRUD operations against a
+number of different record types, and those records are expressed as
+JSON documents produced from instances of JSONModel classes.
+
+The following sections describe how the backend fits together.
+
+## main.rb -- load and initialize the system
+
+The `main.rb` program is responsible for starting the ArchivesSpace
+system: loading all controllers and models, creating
+users/groups/permissions as needed, and preparing the system to handle
+requests.
+
+When the system starts up, the `main.rb` program performs the
+following actions:
+
+- Initializes JSONModel--triggering it to load all record schemas
+  from the filesystem and generate the classes that represent each
+  record type.
+- Connects to the database.
+- Loads all backend models--the system's domain objects and
+  persistence layer.
+- Loads all controllers--defining the system's REST endpoints.
+- Starts the job scheduler--handling scheduled tasks such as backups
+  of the demo database (if used).
+- Runs the "bootstrap ACLs" process--creates the admin user and
+  group if they don't already exist; creates the hidden global
+  repository; creates system users and groups.
+- Fires the "backend started" notification to any registered
+  observers.
+
+In addition to handling the system startup, `main.rb` also provides
+the following facilities:
+
+- Session handling--tracks authenticated backend sessions using the
+  token extracted from the `X-ArchivesSpace-Session` request header.
+- Helper methods for accessing the current user and current session
+  of each request.
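
The session-handling facility can be pictured with a toy sketch. This is not the actual implementation--the constant, store, and helper below are invented for illustration--but it shows the idea of resolving the `X-ArchivesSpace-Session` header to a stored session:

```ruby
# Simplified sketch (NOT the real ArchivesSpace code): resolve the
# session token carried in the request headers to a stored session.
SESSION_HEADER = 'X-ArchivesSpace-Session'

# Pretend session store: token => session data
ACTIVE_SESSIONS = {
  'token-abc123' => { user: 'admin' }
}

# Returns the session for a request's headers, or nil when the token
# is missing or unknown (the real backend would reject the request).
def current_session(headers)
  ACTIVE_SESSIONS[headers[SESSION_HEADER]]
end
```

Here `current_session(SESSION_HEADER => 'token-abc123')` yields the stored admin session, while an unknown token yields `nil`.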
+ +## rest.rb -- Request and response handling for REST endpoints + +The `rest.rb` module provides the mechanism used to define the API's +REST endpoints. Each endpoint definition includes: + +- The URI and HTTP request method used to access the endpoint +- A list of typed parameters for that endpoint +- Documentation for the endpoint, each parameter, and each possible + response that may be returned +- Permission checks--predicates that the current user must satisfy + to be able to use the endpoint + +Each controller in the system consists of one or more of these +endpoint definitions. By using the endpoint syntax provided by +`rest.rb`, the controllers can declare the interface they provide, and +are freed of having to perform the sort of boilerplate associated +with request handling--check parameter types, coerce values from +strings into other types, and so on. + +The `main.rb` and `rest.rb` components work together to insulate the +controllers from much of the complexity of request handling. By the +time a request reaches the body of an endpoint: + +- It can be sure that all required parameters are present and of the + correct types. +- The body of the request has been fetched, parsed into the + appropriate type (usually a JSONModel instance--see below) and + made available as a request parameter. +- Any parameters provided by the client that weren't present in the + endpoint definition have been dropped. +- The user's session has been retrieved, and any defined access + control checks have been carried out. +- A connection to the database has been assigned to the request, and + a transaction has been opened. If the controller throws an + exception, the transaction will be automatically rolled back. + +## Controllers + +As touched upon in the previous section, controllers implement the +functionality of the ArchivesSpace API by registering one or more +endpoints. 
Each endpoint accepts an HTTP request for a given URI,
+carries out the request and returns a JSON response (if successful) or
+throws an exception (if something goes wrong).
+
+Each controller lives in its own file, and these can be found in the
+`backend/app/controllers` directory. Since most of the request
+handling logic is captured by the `rest.rb` module, controllers
+generally don't do much more than coordinate the classes from the
+model layer and send a response back to the client.
+
+### crud_helpers.rb -- capturing common CRUD controller actions
+
+Even though controllers are quite thin, there's still a lot of overlap
+in their behaviour. Each record type in the system supports the same
+set of CRUD operations, and from the controller's point of view
+there's not much difference between an update request for an accession
+and an update request for a digital object (for example).
+
+The `crud_helpers.rb` module pulls this commonality into a set of
+helper methods that are invoked by each controller, providing methods
+for the standard operations of the system.
+
+## Models
+
+The backend's model layer is where the action is. The model layer's
+role is to bridge the gap between the high-level JSONModel objects
+(complete with their properties, nested records, references to other
+records, etc.) and the underlying relational database (via the Sequel
+database toolkit). As such, the model layer is mainly concerned with
+mapping JSONModel instances to database tables in a way that preserves
+everything and allows them to be queried efficiently.
+
+Each record type has a corresponding model class, but the individual
+model definitions are often quite sparse. This is because the
+different record types differ in the following ways:
+
+- The set of properties they allow (and their types, valid values,
+  etc.)
+- The types of nested records they may contain +- The types of relationships they may have with other record types + +The first of these--the set of allowable properties--is already +captured by the JSONModel schema definitions, so the model layer +doesn't have to enforce these restrictions. Each model can simply +take the values supplied by the JSONModel object it is passed and +assume that everything that needs to be there is there, and that +validation has already happened. + +The remaining two aspects _are_ enforced by the model layer, but +generally don't pertain to just a single record type. For example, an +accession may be linked to zero or more subjects, but so can several +other record types, so it doesn't make sense for the `Accession` model +to contain the logic for handling subjects. + +In practice we tend to see very little functionality that belongs +exclusively to a single record type, and as a result there's not much +to put in each corresponding model. Instead, models are generally +constructed by combining a number of mix-ins (Ruby modules) to satisfy +the requirements of the given record type. Features à la carte! + +### ASModel and other mix-ins + +At a minimum, every model includes the `ASModel` mix-in, which provides +base versions of the following methods: + +- `Model.create_from_json` -- Take a JSONModel instance and create a + model instance (a subclass of Sequel::Model) from it. Returns the + instance. +- `model.update_from_json` -- Update the target model instance with + the values from a given JSONModel instance. +- `Model.sequel_to_json` -- Return a JSONModel instance of the appropriate + type whose values are taken from the target model instance. + Model classes are declared to correspond to a particular JSONModel + instance when created, so this method can automatically return a + JSONModel instance of the appropriate type. 
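
How such base methods can be layered can be shown with a toy sketch. This is not real ArchivesSpace code--the module and property names are invented--but it demonstrates the delegation idea: a mix-in overrides `create_from_json`, calls `super` to hand off to the next module in the chain, then adds its own step:

```ruby
# Toy sketch (not ArchivesSpace code) of layering create_from_json.
module ASModelBase
  def create_from_json(json)
    # Pretend database insert: store everything except the notes,
    # which the Notes mix-in handles separately.
    { row: json.reject { |k, _| k == 'notes' } }
  end
end

module Notes
  def create_from_json(json)
    obj = super(json)                      # delegate down the chain first
    obj[:notes] = json.fetch('notes', [])  # then persist the notes
    obj
  end
end

class Accession
  extend ASModelBase
  extend Notes  # looked up before ASModelBase, so its override runs first
end

record = Accession.create_from_json('title' => 'Papers', 'notes' => ['a note'])
# record[:notes] => ["a note"]
```

Because Ruby method lookup walks the extended modules in reverse order of extension, each override can call `super` and trust that the rest of the chain does its part.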
+
+These methods comprise the primary interface of the model layer:
+virtually every mix-in in the model layer overrides one or more of
+these to add behaviour in a modular way.
+
+For example, the 'notes' mix-in adds support for multiple notes to be
+added to a record type--by mixing this module into a model class, that
+class will automatically accept a JSONModel property called 'notes'
+that will be stored in and retrieved from the database as needed.
+This works by overriding the three methods as follows:
+
+- `Model.create_from_json` -- Call 'super' to delegate the creation to
+  the next mix-in in the chain. When it returns the newly created
+  object, extract the notes from the JSONModel instance and attach
+  them to the model instance (saving them in the database).
+- `model.update_from_json` -- Call 'super' to save the other updates
+  to the database, then replace any existing notes entries for the
+  record with the ones provided by the JSONModel.
+- `Model.sequel_to_json` -- Call 'super' to have the next mix-in in
+  the chain create a JSONModel instance, then pull the stored notes
+  from the database and poke them into it.
+
+All of the mix-ins follow this pattern: call 'super' to delegate the
+call to the next mix-in in the chain (eventually reaching ASModel),
+then manipulate the result to implement the desired behaviour.
+
+### Nested records
+
+Some record types, like accessions, digital objects, and subjects, are
+_top-level records_, in the sense that they are created independently
+of any other record and are addressable via their own URI. However,
+there are a number of records that can't exist in isolation, and only
+exist in the context of another record. When one record can contain
+instances of another record, we call them _nested records_.
+
+To give an example, the `date` record type is nested within an
+`accession` record (among others).
When the model layer is asked to +save a JSONModel instance containing nested records, it must pluck out +those records, save them in the appropriate database table, and ensure +that linkages are created within the database to allow them to be +retrieved later. + +This happens often enough that it would be tedious to write code for +each model to handle its nested records, so the ASModel mix-in +provides a declaration to handle this automatically. For example, the +`accession` model uses a definition like: + +```ruby +base.def_nested_record(:the_property => :dates, + :contains_records_of_type => :date, + :corresponding_to_association => :date) +``` + +When creating an accession, this declaration instructs the `Accession` +model to create a database record for each date listed in the "dates" +property of the incoming record. Each of these date records will be +automatically linked to the created accession. + +### Relationships + +A relationship is a link between two top-level records, where the link +is a separate, dynamically generated, model with zero or more +properties of its own. + +For example, the `Event` model can be related to several different +types of records: + +```ruby +define_relationship(:name => :event_link, + :json_property => 'linked_records', + :contains_references_to_types => proc {[Accession, Resource, ArchivalObject]}) +``` + +This declaration generates a custom class that models the relationship +between events and the other record types. 
The corresponding JSON +schema declaration for the `linked_records` property looks like this: + +```ruby +"linked_records" => { + "type" => "array", + "ifmissing" => "error", + "minItems" => 1, + "items" => { + "type" => "object", + "subtype" => "ref", + "properties" => { + "role" => { + "type" => "string", + "dynamic_enum" => "linked_event_archival_record_roles", + "ifmissing" => "error", + }, + "ref" => { + "type" => [{"type" => "JSONModel(:accession) uri"}, + {"type" => "JSONModel(:resource) uri"}, + {"type" => "JSONModel(:archival_object) uri"}, + ...], + "ifmissing" => "error" + }, + ... +``` + +That is, the property includes URI references to other records, plus +an additional "role" property to indicate the nature of the +relationship. The corresponding JSON might then be: + +```ruby +linked_records: [{ref: '/repositories/123/accessions/456', role: 'authorizer'}, ...] +``` + +The `define_relationship` definition automatically makes use of the +appropriate join tables in the database to store this relationship and +retrieve it later as needed. + +### Agents and `agent_manager.rb` + +Agents present a bit of a representational challenge. There are four +types of agents (person, family, corporate entity, software), and at a +high-level they are structured in the same way: each type can contain +one or more name records, zero or more contact records, and a number +of properties. Records that link to agents (via a relationship, for +example) can link to any of the four types so, in some sense, each +agent type implements a common `Agent` interface. + +However, the agent types differ in their details. Agents contain name +records, but the types of those name records correspond to the type of +the agent: a person agent contains a person name record, for example. +So, in spite of their similarities, the different agents need to be +modelled as separate record types. + +The `agent_manager` module captures the high-level similarities +between agents. 
Each agent model includes the agent manager mix-in:
+
+```ruby
+include AgentManager::Mixin
+```
+
+and then defines itself declaratively by the provided class method:
+
+```ruby
+register_agent_type(:jsonmodel => :agent_person,
+                    :name_type => :name_person,
+                    :name_model => NamePerson)
+```
+
+This definition sets up the properties of that agent. It creates:
+
+- a one_to_many relationship with the corresponding name
+  type of the agent.
+- a one_to_many relationship with the agent_contact table.
+- a nested record definition which defines the agent's list of names
+  (so the names of the agent are automatically stored in
+  and retrieved from the database).
+- a nested record definition for the agent's contact list.
+
+## Validations
+
+As records are added to and updated within the ArchivesSpace system,
+they are validated against a number of rules to make sure they are
+well-formed and don't conflict with other records. There are two
+types of record validation:
+
+- Record-level validations check that a record is self-consistent:
+  that it contains all required fields, that its values are of the
+  appropriate type and format, and that its fields don't contradict
+  one another.
+- System-level validations check that a record makes sense in a
+  broader context: that it doesn't share a unique identifier with
+  another record, and that any record it references actually exists.
+
+Record-level validations can be performed in isolation, while
+system-level validations require comparing the record to others in the
+database.
+
+System-level validations need to be implemented in the database itself
+(as integrity constraints), but record-level validations are often too
+complex to be expressed this way. As a result, validations in
+ArchivesSpace can appear in one or both of the following layers:
+
+- At the JSONModel level, validations are captured by JSON schema
+  documents.
Where more flexibility is needed, custom validations
+  are added to the `common/validations.rb` file, allowing validation
+  logic to be expressed using arbitrary Ruby code.
+- At the database level, validations are captured using database
+  constraints. Since the error messages yielded by these
+  constraints generally aren't useful for users, database
+  constraints are also replicated in the backend's model layer using
+  Sequel validations, which give more targeted error messages.
+
+As a general rule, record-level validations are handled by the
+JSONModel validations (either through the JSON schema or custom
+validations), while system-level validations are handled by the model
+and the database schema.
+
+## Optimistic concurrency control
+
+Updating a record using the ArchivesSpace API is a two-part process:
+
+```text
+# Perform a `GET` against the desired record to fetch its JSON
+# representation:
+
+GET /repositories/5/accessions/2
+
+# Manipulate the JSON representation as required, and then `POST`
+# it back to replace the original:
+
+POST /repositories/5/accessions/2
+```
+
+If two people do this simultaneously, there's a risk that one person
+would silently overwrite the changes made by the other. To prevent
+this, every record is marked with a version number that it carries in
+the `lock_version` property. When the system receives the updated
+copy of a record, it checks that the version it carries is still
+current; if the version number doesn't match the one stored in the
+database, the update request is rejected and the user must re-fetch
+the latest version before applying their update.
+
+## The ArchivesSpace permissions model
+
+The ArchivesSpace backend enforces access control, defining which
+users are allowed to create, read, update, suppress and delete the
+records in the system. The major actors in the permissions model are:
+
+- Repositories -- The main mechanism for partitioning the
+  ArchivesSpace system.
For example, an instance might contain one
+  repository for each section of an organisation, or one repository
+  for each major collection.
+- Users -- An entity that uses the system--often a person, but
+  perhaps a consumer of the ArchivesSpace API. The set of users is
+  global to the system, and a single user may have access to
+  multiple repositories.
+- Records -- A unit of information in the system. Some records are
+  global (existing outside of any given repository), while some are
+  repository-scoped (belonging to a single repository).
+- Groups -- A set of users _within_ a repository. Each group is
+  assigned zero or more permissions, which it confers upon its
+  members.
+- Permissions -- An action that a user can perform. For example, a
+  user with the `update_accession_record` permission is allowed to
+  update accessions for a repository.
+
+To summarize, a user can perform an action within a repository if they
+are a member of a group that has been assigned permission to perform
+that action.
+
+### Conceptual trickery
+
+Since they're repository-scoped, groups govern access to repositories.
+However, there are several record types that exist at the top-level of
+the system (such as the repositories themselves, subjects and agents),
+and the permissions model must be able to accommodate these.
+
+To get around this, we invent a concept: the "global" repository
+conceptually contains the whole ArchivesSpace universe. As with other
+repositories, the global repository contains groups, and users can be
+made members of these groups to grant them permissions across the
+entire system. One example of this is the "admin" user, which is
+granted all permissions by the "administrators" group of the global
+repository; another is the "search indexer" user, which can read (but
+not update or delete) any record in the system.
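
The permission rule above, including the global repository, can be captured in a toy sketch. This is not the actual implementation--the data layout and names are invented--but it shows the resolution order: a user may act in a repository when some group they belong to, in that repository or in the global repository, carries the matching permission:

```ruby
# Toy sketch (not the real ArchivesSpace code) of permission checking.
GLOBAL_REPO = :global

GROUPS = [
  { repo: :repo2,      members: ['archivist1'],
    permissions: [:update_accession_record] },
  { repo: GLOBAL_REPO, members: ['admin'],
    permissions: [:update_accession_record, :delete_archival_record] }
]

# True when some group the user belongs to--scoped to this repository
# or to the global repository--carries the permission.
def can?(user, permission, repo)
  GROUPS.any? do |group|
    group[:members].include?(user) &&
      group[:permissions].include?(permission) &&
      [repo, GLOBAL_REPO].include?(group[:repo])
  end
end
```

With this data, `archivist1` can update accessions in `:repo2` but not elsewhere, while `admin` passes every check via the global "administrators"-style group.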
diff --git a/src/content/docs/fr/architecture/database.md b/src/content/docs/fr/architecture/database.md new file mode 100644 index 0000000..37609e0 --- /dev/null +++ b/src/content/docs/fr/architecture/database.md @@ -0,0 +1,554 @@ +--- +title: Database +description: Describes the structure of the ArchivesSpace database, including a breakdown between the main, supporting, subrecord, relationship, enumerations, user-setting-permissions, job, and system tables. It also breaks down the specific fields present in the different tables. +--- + +The ArchivesSpace database stores all data that is created within an ArchivesSpace instance. As described in other sections of this documentation, the backend code - particularly the model layer and `ASModel_crud.rb` file - uses the `Sequel` database toolkit to bridge the gap between this underlying data and the JSON objects which are exchanged by the other components of the system. + +Often, querying the database directly is the most efficient and powerful way to retrieve data from ArchivesSpace. It is also possible to use raw SQL queries to create custom reports that can be run by users in the staff interface. Please consult the [Custom Reports](/customization/reports) section of this documentation for additional information on creating custom reports. + +<!-- .See this [plugin](link-to-plugin) for an example. Also --> + +It is recommended that ArchivesSpace be run against MySQL in production, not the included demo database. Instructions on setting up ArchivesSpace to run against MySQL are [here](/provisioning/mysql). + +The examples in this section are written for MySQL. There are many freely-available tutorials on the internet which can provide guidance to those unfamiliar with MySQL query syntax and the features of the language. + +**NOTE**: the documentation below is current through database schema version 129, application version 2.7.1. 
+
+## Database Overview
+
+The ArchivesSpace database schema and its mapping to the JSONModel objects used by the other parts of the system is defined by the files in the `common/schemas` and `backend/models` directories. The database itself is created via the `setup-database` script in the `scripts` directory. This script runs the migrations in the `common/db/migrations` directory.
+
+The tables in the ArchivesSpace database can be grouped into several general categories:
+
+- [Database Overview](#database-overview)
+- [Main record tables](#main-record-tables)
+- [Supporting record tables](#supporting-record-tables)
+- [Subrecord tables](#subrecord-tables)
+- [Relationship tables](#relationship-tables)
+- [Enumerations](#enumerations)
+- [User, setting, and permission tables](#user-setting-and-permission-tables)
+- [Job tables](#job-tables)
+- [System tables](#system-tables)
+- [Parent-Child Relationships and Sequencing](#parent-child-relationships-and-sequencing)
+  - [Repository-scoped records](#repository-scoped-records)
+  - [Parent/child relationships](#parentchild-relationships)
+  - [Sequencing](#sequencing)
+- [Boolean fields](#boolean-fields)
+- [Read-Only Fields](#read-only-fields)
+
+One way to get a view of all tables and columns in your ArchivesSpace instance is to run the following query in a MySQL client:
+
+```sql
+SELECT TABLE_SCHEMA
+     , TABLE_NAME
+     , COLUMN_NAME
+     , ORDINAL_POSITION
+     , IS_NULLABLE
+     , COLUMN_TYPE
+     , COLUMN_KEY
+FROM INFORMATION_SCHEMA.COLUMNS
+# change the following value to whatever your database is named
+WHERE TABLE_SCHEMA LIKE 'archivesspace';
+```
+
+Additionally, a BETA version of an [ArchivesSpace data dictionary](https://github.com/archivesspace/data-dictionary-initial) has been created by members of the ArchivesSpace development team and the ArchivesSpace User Advisory Council Reports team.
+
+## Main record tables
+
+These tables hold data about the primary record types in ArchivesSpace.
Main record types are distinguished from subrecords in that they have their own persistent URIs - corresponding to their database identifiers/primary keys - that are resolvable via the staff interface, public interface, and API. They are distinguished from supporting records in that they are the primary descriptive record types that users will interact with in the system. + +All of these records, except archival objects, can be created independently of any other record. Archival object records represent components of a larger entity, and so they must have a resource record as a root parent. See the [parent/child relationships](#parent-child-relationships-and-sequencing) section for more information about the representation of hierarchical relationships in the database. + +A few common fields occur in several main record tables. These similar fields are defined by the parent schemas in the `common/schemas` directory: + +| Column Name | Tables | +| ----------------------------------------------- | ---------------------------------------------------------------------------------------- | +| `title` | `accession`, `archival_object`, `digital_object`, `digital_object_component`, `resource` | +| `identifier`/`component_id`/`digital_object_id` | `accession`, `resource`/`archival_object`, `digital_object_component`/`digital_object` | +| `other_level` | `archival_object`, `resource` | +| `repository_processing_note` | `archival_object`, `resource` | + +<!-- Booleans --> + +All of the main records have a set of fields which store boolean values (`0` or `1`) that indicate whether the records are published in the public user interface, suppressed in the staff interface, or have some kind of applicable restriction. The exception to this is the `repository` table, which does not have a restriction boolean, but does have a `hidden` boolean. The `accession` table has multiple restriction-related booleans. See the section below for more information about boolean fields. 
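
As a sketch of how these boolean flags are typically queried (the `publish` and `suppressed` columns on `resource` are used here; verify column names against your schema version), resources hidden from the public interface can be listed with:

```sql
-- Sketch: resources that are unpublished or suppressed
SELECT id, title, publish, suppressed
FROM resource
WHERE publish = 0
   OR suppressed = 1;
```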
+ +Beginning in version 2.6.0, the main record tables (and some supporting records - see below) also contain fields which hold data about archival resource keys (ARKs) and human-readable URLs (slugs): + +| Column Name | Tables | +| ------------------ | ------------------------------------------------------------------------------------------------------ | +| `slug` | `accession`, `archival_object`, `digital_object`, `digital_object_component`, `repository`, `resource` | +| `external_ark_url` | `archival_object`, `resource` | + +Also stored in these and all other tables are enumeration values, foreign keys which correspond to database identifiers in the `enumeration_value` table, which stores controlled values. See enumeration section below for more detail. + +All subrecord data types - i.e. dates, extents, instances - relating to a main or supporting record are stored in their own tables and linked to main or supporting records via foreign key references in the subrecord tables. See subrecord section below for more detail. 
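
For example, a controlled-value foreign key can be resolved by joining to `enumeration_value`. The sketch below assumes the `level_id` column on `resource`; as elsewhere, check the column names for your schema version:

```sql
-- Sketch: resolve the controlled `level_id` value for each resource
SELECT resource.id,
       resource.title,
       enumeration_value.value AS level
FROM resource
JOIN enumeration_value ON enumeration_value.id = resource.level_id;
```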
+ +The remaining data in the main record tables is text, and is unique to each table: + +| TABLE_NAME | COLUMN_NAME | IS_NULLABLE | COLUMN_TYPE | COLUMN_KEY | +| -------------------------- | ------------------------------- | ----------- | ------------ | ---------- | +| `accession` | `content_description` | YES | text | | +| `accession` | `condition_description` | YES | text | | +| `accession` | `disposition` | YES | text | | +| `accession` | `inventory` | YES | text | | +| `accession` | `provenance` | YES | text | | +| `accession` | `general_note` | YES | text | | +| `accession` | `accession_date` | YES | date | | +| `accession` | `retention_rule` | YES | text | | +| `accession` | `access_restrictions_note` | YES | text | | +| `accession` | `use_restrictions_note` | YES | text | | +| `archival_object` | `ref_id` | NO | varchar(255) | MUL | +| `digital_object_component` | `label` | YES | varchar(255) | | +| `repository` | `repo_code` | NO | varchar(255) | UNI | +| `repository` | `name` | NO | varchar(255) | | +| `repository` | `org_code` | YES | varchar(255) | | +| `repository` | `parent_institution_name` | YES | varchar(255) | | +| `repository` | `url` | YES | varchar(255) | | +| `repository` | `image_url` | YES | varchar(255) | | +| `repository` | `contact_persons` | YES | text | | +| `repository` | `description` | YES | text | | +| `repository` | `oai_is_disabled` | YES | int | | +| `repository` | `oai_sets_available` | YES | text | | +| `resource` | `ead_id` | YES | varchar(255) | | +| `resource` | `ead_location` | YES | varchar(255) | | +| `resource` | `finding_aid_title` | YES | text | | +| `resource` | `finding_aid_filing_title` | YES | text | | +| `resource` | `finding_aid_date` | YES | varchar(255) | | +| `resource` | `finding_aid_author` | YES | text | | +| `resource` | `finding_aid_language_note` | YES | varchar(255) | | +| `resource` | `finding_aid_sponsor` | YES | text | | +| `resource` | `finding_aid_edition_statement` | YES | text | | +| `resource` | 
`finding_aid_series_statement` | YES | text | |
+| `resource` | `finding_aid_note` | YES | text | |
+| `resource` | `finding_aid_subtitle` | YES | text | |
+
+<!-- arguably top containers should be here, or digital objects should be in the supporting records -->
+
+## Supporting record tables
+
+Like the main record types listed above, supporting records can also be created independently of other records, and are addressable in the staff interface and API via their own URI. However, they are primarily meaningful via their many-to-many linkages to the main record types (and, sometimes, other supporting record types). These records typically provide additional information about, or otherwise enhance, the primary record types. A few supporting record types - for instance those in the `term` table - are used to enhance other supporting record types.
+
+| Supporting module tables          | Linked to                                           |
+| --------------------------------- | --------------------------------------------------- |
+| `agent_corporate_entity`          |                                                     |
+| `agent_family`                    |                                                     |
+| `agent_person`                    |                                                     |
+| `agent_software`                  |                                                     |
+| `assessment`                      |                                                     |
+| `classification`                  | `accession`, `resource`                             |
+| `classification_term`             | `classification`, `accession`, `resource`           |
+| `container_profile`               | `top_container`                                     |
+| `event`                           |                                                     |
+| `location`                        |                                                     |
+| `location_profile`                | `location`                                          |
+| `subject`                         | `resource`, `archival_object`                       |
+| `term`                            | `subject`                                           |
+| `top_container`                   |                                                     |
+| `vocabulary`                      | `subject`, `term`                                   |
+| `assessment_attribute_definition` | `assessment_attribute`, `assessment_attribute_note` |
+
+<!-- is this the appropriate place for the assessment attribute def? Vocabulary? -->
+
+## Subrecord tables
+
+<!-- link to ### Nested records section of the backend readme -->
+
+Subrecords must be associated with a main or supporting record - they cannot be created independently. As such, they do not have their own URIs, and can only be accessed via the API by retrieving the top-level record with which they are associated.
In the staff interface these records are embedded within main or supporting record views. In the API subrecord data is contained in arrays within main or supporting records. + +The various subrecord types do have their own database tables. In addition to data specific to the subrecord type, the tables also contain foreign key columns which hold the database identifiers of main or supporting records. Subrecord tables must have a value in one of the foreign key fields. Some subrecords can have another subrecord as parent (for instance, the `sub_container` subrecord has `instance_id` as its foreign key column). + +Subrecords exist in a one-to-many relationship with their parent records, so a record's `id` may appear multiple times in a subrecord table (i.e. when there are two dates associated with a resource record). + +It is important to note that subrecords are deleted and recreated upon each save of the main or supporting record with which they are associated, regardless of whether the subrecord itself is modified. This means that the database identifier is deleted and reassigned upon each save. 
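
Because of this one-to-many pattern, fetching a record's subrecords is a simple foreign-key filter. A sketch for the `date` table follows (a resource with id 1 is assumed; `date`, `begin`, and `end` are backticked because they collide with SQL keywords):

```sql
-- Sketch: all date subrecords attached to resource 1
SELECT `date`.id, `date`.`begin`, `date`.`end`, `date`.expression
FROM `date`
WHERE `date`.resource_id = 1;
```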
+ +| Subrecord tables | Foreign keys | +| --------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `agent_contact` | `agent_person_id`, `agent_family_id`, `agent_corporate_entity_id`, `agent_software_id` | +| `date` | `accession_id`, `deaccession_id`, `archival_object_id`, `resource_id`, `event_id`, `digital_object_id`, `digital_object_component_id`, `related_agents_rlshp_id`, `agent_person_id`, `agent_family_id`, `agent_corporate_entity_id`, `agent_software_id`, `name_person_id`, `name_family_id`, `name_corporate_entity_id`, `name_software_id` | +| `extent` | `accession_id`, `deaccession_id`, `archival_object_id`, `resource_id`, `digital_object_id`, `digital_object_component_id` | +| `external_document` | `accession_id`, `archival_object_id`, `resource_id`, `subject_id`, `agent_person_id`, `agent_family_id`, `agent_corporate_entity_id`, `agent_software_id`, `rights_statement_id`, `digital_object_id`, `digital_object_component_id`, `event_id` | +| `external_id` | `subject_id`, `accession_id`, `archival_object_id`, `collection_management_id`, `digital_object_id`, `digital_object_component_id`, `event_id`, `location_id`, `resource_id` | +| `file_version` | `digital_object_id`, `digital_object_component_id` | +| `instance` | `resource_id`, `archival_object_id`, `accession_id` | +| `name_authority_id` | `name_person_id`, `name_family_id`, `name_software_id`, `name_corporate_entity_id` | +| `name_corporate_entity` | `agent_corporate_entity_id` | +| `name_family` | `agent_family_id` | +| `name_person` | `agent_person_id` | +| `name_software` | `agent_software_id` | +| `note` | `resource_id`, `archival_object_id`, `digital_object_id`, `digital_object_component_id`, 
`agent_person_id`, `agent_corporate_entity_id`, `agent_family_id`, `agent_software_id`, `rights_statement_act_id`, `rights_statement_id` | +| `note_persistent_id` | `note_id`, `parent_id` | +| `revision_statement` | `resource_id` | +| `rights_restriction` | `resource_id`, `archival_object_id` | +| `rights_restriction_type` | `rights_restriction_id` | +| `rights_statement` | `accession_id`, `archival_object_id`, `resource_id`, `digital_object_id`, `digital_object_component_id`, `repo_id` | +| `rights_statement_act` | `rights_statement_id` | +| `sub_container` | `instance_id` | +| `telephone` | `agent_contact_id` | +| `user_defined` | `accession_id`, `resource_id`, `digital_object_id` | +| `ark_name` | `archival_object_id`, `resource_id` | +| `assessment_attribute_note` | `assessment_id` | +| `assessment_attribute` | `assessment_id` | +| `lang_material` | `archival_object_id`, `resource_id`, `digital_object_id`, `digital_object_component_id` | +| `language_and_script` | `lang_material_id` | +| `collection_management` | `accession_id`, `resource_id`, `digital_object_id` | +| `location_function` | `location_id` | + +<!-- appropriate place for collection management and deaccession stuff? what about location function? all the rights statement stuff? Is there a specific thing that defines a subrecord as a subrecord? --> + +## Relationship tables + +These tables exist to enable linking between main records and supporting records. Relationship tables are necessary because, unlike subrecord tables, supporting record tables do not include foreign keys which link them to the main record tables. + +Most relationship tables have the `_rlshp` suffix in their names. They typically contain just the primary keys for the tables that are being linked, though a few tables also include fields that are specific to the relationship between the two record types. 
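+
+For example, one of the simplest linking tables, `subject_rlshp`, can be joined as follows (a sketch; assumes `subject_id` and `resource_id` foreign key columns on the linking table, and the resource ID is a placeholder):
+
+```sql
+-- List all subjects linked to a given resource via the subject_rlshp
+-- linking table. Replace 123 with the database ID of your resource.
+SELECT subject.title AS subject_title
+     , resource.title AS resource_title
+FROM subject_rlshp sr
+JOIN subject ON sr.subject_id = subject.id
+JOIN resource ON sr.resource_id = resource.id
+WHERE resource.id = 123;
+```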
+
+| Relationship/linking tables | Tables linked |
+| ----------------------------------- | ------------- |
+| `assessment_reviewer_rlshp` | `assessment` to `agent_person` |
+| `assessment_rlshp` | `assessment` to `accession`, `archival_object`, `resource`, or `digital_object` |
+| `classification_creator_rlshp` | `classification` to `agent_person`, `agent_family`, `agent_corporate_entity`, or `agent_software` |
+| `classification_rlshp` | `classification` or `classification_term` to `resource` or `accession` |
+| `classification_term_creator_rlshp` | `classification_term` to `agent_person`, `agent_family`, `agent_corporate_entity`, or `agent_software` |
+| `event_link_rlshp` | `event` to `accession`, `resource`, `archival_object`, `digital_object`, `digital_object_component`, `agent_person`, `agent_family`, `agent_corporate_entity`, `agent_software`, or `top_container`. Also includes a `role_id` column, which can be joined with the `enumeration_value` table to return the event role (source, outcome, transfer, context) |
+| `instance_do_link_rlshp` | `digital_object` to `instance` |
+| `linked_agents_rlshp` | `agent_person`, `agent_software`, `agent_family`, or `agent_corporate_entity` to `accession`, `archival_object`, `digital_object`, `digital_object_component`, `event`, or `resource`. Also includes `role_id` and `relator_id` columns, which can be joined with the `enumeration_value` table |
+| `location_profile_rlshp` | `location` to `location_profile` |
+| `owner_repo_rlshp` | `location` to `repository` |
+| `related_accession_rlshp` | Links a row in the `accession` table to another row in the `accession` table. 
Also includes fields for `relator` and relationship type. | +| `related_agents_rlshp` | `agent_person`, `agent_corporate_entity`, `agent_software`, or `agent_family` to other agent tables, or two rows in the same agent table. Also includes fields for `relator` and `description`, and the type of relationship. | +| `spawned_rlshp` | `accession` to `resource`. This contains all linked accession data, even if the resource was not spawned from the accession record. | +| `subject_rlshp` | `subject` to `accession`, `archival_object`, `resource`, `digital_object`, or `digital_object_component` | +| `surveyed_by_rlshp` | `assessment` to `agent_person` | +| `top_container_housed_at_rlshp` | `top_container` to `location`. Also includes fields for `start_date`, `end_date`, `status`, and a free-text `note`. | +| `top_container_link_rlshp` | `top_container` to `sub_container` | +| `top_container_profile_rlshp` | `top_container` to `container_profile` | +| `subject_term` | `subject` to `term` | +| `linked_agent_term` | `linked_agents_rlshp` to `term` | + +<!-- is the assessment definition thing a linking table - it pretty much only has foreign keys + +Same question about one of the rights restriction tables - can't remember which one right now. + --> + +It is not always obvious which relationship tables will provide the desired results. 
For instance, to get a box list for a given resource record, enter the following query into a MySQL editor: + +```sql +SELECT DISTINCT CONCAT('/repositories/', resource.repo_id, '/resources/', resource.id) as resource_uri + , resource.identifier + , resource.title + , tc.barcode as barcode + , tc.indicator as box_number +FROM sub_container sc +JOIN top_container_link_rlshp tclr on tclr.sub_container_id = sc.id +JOIN top_container tc on tclr.top_container_id = tc.id +JOIN instance on sc.instance_id = instance.id +JOIN archival_object ao on instance.archival_object_id = ao.id +JOIN resource on ao.root_record_id = resource.id +#change to your desired resource id +WHERE resource.id = 4556 +``` + +Sometimes numerous relationship tables must be joined to retrieve the desired results. For instance, to get all boxes and folders for a given resource record, including any container profiles and locations, enter the following query into a MySQL editor: + +```sql +SELECT CONCAT('/repositories/', tc.repo_id, '/top_containers/', tc.id) as tc_uri + , CONCAT('/repositories/', resource.repo_id, '/resources/', resource.id) as resource_uri + , CONCAT('/repositories/', resource.repo_id) as repo_uri + , CONCAT('/repositories/', ao.repo_id, '/archival_objects/', ao.id) as ao_uri + , resource.identifier AS resource_identifier + , resource.title AS resource_title + , ao.display_string AS ao_title + , ev2.value AS level + , tc.barcode AS barcode + , cp.name AS container_profile + , tc.indicator AS container_num + , ev.value AS sc_type + , sc.indicator_2 AS sc_num +from sub_container sc +JOIN top_container_link_rlshp tclr on tclr.sub_container_id = sc.id +JOIN top_container tc on tclr.top_container_id = tc.id +LEFT JOIN top_container_profile_rlshp tcpr on tcpr.top_container_id = tc.id +LEFT JOIN container_profile cp on cp.id = tcpr.container_profile_id +LEFT JOIN top_container_housed_at_rlshp tchar on tchar.top_container_id = tc.id +JOIN instance on sc.instance_id = instance.id +JOIN 
archival_object ao on instance.archival_object_id = ao.id
+JOIN resource on ao.root_record_id = resource.id
+LEFT JOIN enumeration_value ev on ev.id = sc.type_2_id
+LEFT JOIN enumeration_value ev2 on ev2.id = ao.level_id
+#change to your desired resource id
+WHERE resource.id = 4223
+```
+
+<!-- Mention the CONCAT function for creating URIs -->
+
+## Enumerations
+
+All controlled values used by the application (excluding tooltips, frontend/public display values, and the values stored in a few of the supporting record tables; see below) are stored in a table called `enumeration_value`. Controlled values are organized into a variety of parent enumerations (akin to a set of distinct controlled value lists) which are used by different record and subrecord types. Parent enumeration data is stored in the `enumeration` table and is linked via the `enumeration_id` foreign key field in the `enumeration_value` table. In the record and subrecord tables, enumeration values appear as foreign keys in a variety of columns, usually identified by an `_id` suffix.
+
+ArchivesSpace comes with a standard set of controlled values, and most of these are modifiable by end-users via the staff interface and API. However, some values in the `enumeration_value` table are read-only: these values define the terminology and data types used in different parts of the application (e.g. the various note types).
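+
+Before working with the foreign key columns listed below, it can help to see a parent enumeration and its values side by side. The following sketch lists every controlled value in one enumeration (here `date_label`, one of the enumeration names from the table below; the `name` column on `enumeration` and the `position` ordering column are assumed):
+
+```sql
+-- List the controlled values belonging to the date_label enumeration,
+-- in their configured display order.
+SELECT ev.value
+     , ev.position
+FROM enumeration_value ev
+JOIN enumeration e ON ev.enumeration_id = e.id
+WHERE e.name = 'date_label'
+ORDER BY ev.position;
+```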
+
+Enumeration IDs appear as foreign keys in a variety of database tables:
+
+| table_name | column_name | enumeration_name |
+| ---------- | ----------- | ---------------- |
+| `accession` | `acquisition_type_id` | accession_acquisition_type |
+| `accession` | `resource_type_id` | accession_resource_type |
+| `agent_contact` | `salutation_id` | agent_contact_salutation |
+| `archival_object` | `level_id` | archival_record_level |
+| `collection_management` | `processing_priority_id` | collection_management_processing_priority |
+| `collection_management` | `processing_status_id` | collection_management_processing_status |
+| `collection_management` | `processing_total_extent_type_id` | extent_extent_type |
+| `container_profile` | `dimension_units_id` | dimension_units |
+| `date` | `calendar_id` | date_calendar |
+| `date` | `certainty_id` | date_certainty |
+| `date` | `date_type_id` | date_type |
+| `date` | `era_id` | date_era |
+| `date` | `label_id` | date_label |
+| `deaccession` | `scope_id` | deaccession_scope |
+| `digital_object` | `digital_object_type_id` | digital_object_digital_object_type |
+| `digital_object` | `level_id` | digital_object_level |
+| `event` | `event_type_id` | event_event_type |
+| `event` | `outcome_id` | event_outcome |
+| `extent` | `extent_type_id` | extent_extent_type |
+| `extent` | `portion_id` | extent_portion |
+| `external_document` | `identifier_type_id` | rights_statement_external_document_identifier_type |
+| `file_version` | `checksum_method_id` | file_version_checksum_methods |
+| `file_version` | `file_format_name_id` | file_version_file_format_name |
+| `file_version` | `use_statement_id` | file_version_use_statement |
+| `file_version` | `xlink_actuate_attribute_id` | file_version_xlink_actuate_attribute |
+| `file_version` | `xlink_show_attribute_id` | file_version_xlink_show_attribute |
+| `instance` | `instance_type_id` | 
instance_instance_type |
+| `language_and_script` | `language_id` | language_iso639_2 |
+| `language_and_script` | `script_id` | script_iso15924 |
+| `location` | `temporary_id` | location_temporary |
+| `location_function` | `location_function_type_id` | location_function_type |
+| `location_profile` | `dimension_units_id` | dimension_units |
+| `name_corporate_entity` | `rules_id` | name_rule |
+| `name_corporate_entity` | `source_id` | name_source |
+| `name_family` | `rules_id` | name_rule |
+| `name_family` | `source_id` | name_source |
+| `name_person` | `name_order_id` | name_person_name_order |
+| `name_person` | `rules_id` | name_rule |
+| `name_person` | `source_id` | name_source |
+| `name_software` | `rules_id` | name_rule |
+| `name_software` | `source_id` | name_source |
+| `repository` | `country_id` | country_iso_3166 |
+| `resource` | `finding_aid_description_rules_id` | resource_finding_aid_description_rules |
+| `resource` | `finding_aid_language_id` | language_iso639_2 |
+| `resource` | `finding_aid_script_id` | script_iso15924 |
+| `resource` | `finding_aid_status_id` | resource_finding_aid_status |
+| `resource` | `level_id` | archival_record_level |
+| `resource` | `resource_type_id` | resource_resource_type |
+| `rights_restriction_type` | `restriction_type_id` | restriction_type |
+| `rights_statement` | `jurisdiction_id` | country_iso_3166 |
+| `rights_statement` | `other_rights_basis_id` | rights_statement_other_rights_basis |
+| `rights_statement` | `rights_type_id` | rights_statement_rights_type |
+| `rights_statement` | `status_id` | |
+| `rights_statement_act` | `act_type_id` | rights_statement_act_type |
+| `rights_statement_act` | `restriction_id` | rights_statement_act_restriction |
+| `rights_statement_pre_088` | `ip_status_id` | rights_statement_ip_status |
+| `rights_statement_pre_088` | `jurisdiction_id` | country_iso_3166 |
+| `rights_statement_pre_088` | `rights_type_id` | rights_statement_rights_type |
+| `sub_container` | `type_2_id` | container_type |
+| `sub_container` | `type_3_id` | container_type |
+| `subject` | `source_id` | 
subject_source | +| `telephone` | `number_type_id` | telephone_number_type | +| `term` | `term_type_id` | subject_term_type | +| `top_container` | `type_id` | container_type | + +<!-- need to add some rlshp tables which have enums --> + +To translate the enumeration ID that appears in the record and subrecord tables, join the `enumeration_value` table. The table can be joined multiple times if there are multiple values to translate, but you must use an alias for each table. For example: + +```sql +SELECT CONCAT('/repositories/', ao.repo_id, '/archival_objects/', ao.id) as ao_uri + , ao.display_string as ao_title + , date.begin + , date.end + , ev.value as date_label + , ev2.value as date_type + , ev3.value as date_calendar +FROM archival_object ao +LEFT JOIN date on date.archival_object_id = ao.id +LEFT JOIN enumeration_value ev on ev.id = date.label_id +LEFT JOIN enumeration_value ev2 on ev2.id = date.date_type_id +LEFT JOIN enumeration_value ev3 on ev3.id = date.calendar_id +``` + +**NOTE**: `container_profile`, `location_profile`, and `assessment_attribute_definition` records are similar to the records in the `enumeration_value` table in that they store controlled values which are referenced by other parts of the system. However, they differ in that they have their own tables and are addressable via their own URIs. + +## User, setting, and permission tables + +These tables store user and permissions information, user/repository/global preferences, and RDE and custom report templates. 
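+
+The linking tables in this group follow the same join pattern as the relationship tables above. For instance, a sketch that resolves every permission a user holds through group membership (the `user_id`, `group_id`, `permission_id`, and `permission_code` column names are assumptions based on the table descriptions below, and 'admin' is a placeholder username):
+
+```sql
+-- List each permission a user has been granted via permission groups.
+-- Replace 'admin' with the username you want to inspect.
+SELECT u.username
+     , p.permission_code
+FROM user u
+JOIN group_user gu ON gu.user_id = u.id
+JOIN group_permission gp ON gp.group_id = gu.group_id
+JOIN permission p ON p.id = gp.permission_id
+WHERE u.username = 'admin';
+```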
+
+| Table name | Description |
+| ---------- | ----------- |
+| `custom_report_template` | Custom report templates |
+| `default_values` | Default values settings |
+| `group` | Data about permission groups created by each repository |
+| `group_permission` | Links the permission table to the group table |
+| `group_user` | Links the group table to the user table |
+| `oai_config` | Configuration data for OAI-PMH harvesting |
+| `permission` | All permission types that can be assigned to users |
+| `preference` | User preference data |
+| `rde_template` | RDE templates |
+| `required_fields` | Contains repository-defined required fields |
+| `user` | User data |
+
+## Job tables
+
+These tables store data related to background jobs, including imports.
+
+| Table name | Description |
+| ---------- | ----------- |
+| `job` | All jobs which have been run in an ArchivesSpace instance |
+| `job_created_record` | Records created via background jobs |
+| `job_input_file` | Data about input files used in background jobs |
+| `job_modified_record` | Data about records modified via background jobs |
+
+## System tables
+
+These tables track actions taken against the database (e.g. edits and deletes), system events, session and authorization data, and database information. These tables are typically not referenced by any other table.
+
+| Table name | Description |
+| ---------- | ----------- |
+| `active_edit` | Records being actively edited by a user. Read-only system table |
+| `auth_db` | Authentication data for users. Read-only system table |
+| `deleted_records` | Records deleted in the past 24 hours. Read-only system table |
+| `notification` | Notifications stream. Read-only system table |
+| `schema_info` | Contains the database schema version. 
Read-only system table |
+| `sequence` | The value is one less than the number of children the archival object has. Read-only system table |
+| `session` | Recent session data. Read-only system table |
+| `system_event` | System event data. Read-only system table |
+
+<!-- these are subrecords -->
+<!-- | subnote_metadata |
+| rights_statement_pre_088 | -->
+
+## Parent-child relationships and sequencing
+
+### Repository-scoped records
+
+Many main and supporting records are scoped to a particular repository. In these tables, the parent repository is identified by a foreign key which corresponds to the database identifier in the `repository` table:
+
+| Column name | Description | Example | Found in |
+| ----------- | ----------- | ------- | -------- |
+| `repo_id` | The database ID of the parent repository | `12` | `accession`, `archival_object`, `assessment`, `assessment_attribute_definition`, `classification`, `classification_term`, `custom_report_template`, `default_values`, `digital_object`, `digital_object_component`, `event`, `group`, `job`, `preference`, `required_fields`, `resource`, `rights_statement`, `top_container` |
+
+### Parent/child relationships
+
+Hierarchical relationships between other records are also expressed through foreign keys:
+
+| Column name | Description | Example | PK Tables | Found in |
+| ----------- | ----------- | ------- | 
--------- | -------- |
+| `root_record_id` | The database ID of the root parent record | `4566` | `resource`, `digital_object`, `classification` | `archival_object`, `digital_object_component`, `classification_term` |
+| `parent_id` | The database ID of the immediate parent record. This is used to identify parent records which are of the same type as the child record (e.g. two archival object records). The value will be NULL if the only parent is the root record. | `1748121` | `archival_object`, `classification_term`, `digital_object_component` | `archival_object`, `classification_term`, `digital_object_component`, `note_persistent_id` |
+| `parent_name` | The database ID or URI, and the record type, of the immediate parent | `144@archival_object`, `root@/repositories/2/resources/2` | `resource`, `archival_object`, `classification`, `classification_term`, `digital_object`, `digital_object_component` | `archival_object`, `classification_term`, `digital_object_component` |
+
+Beginning with MySQL 8, you can recursively retrieve all parents of an archival object (or all archival objects linked to a resource) by running the following query:
+
+```sql
+WITH RECURSIVE ao_path AS
+  (SELECT ao1.id
+   , ao1.display_string
+   , ao1.component_id
+   , ao1.parent_id
+   , ev.value as `ao_level`
+   , 1 as level
+   FROM archival_object ao1
+   LEFT JOIN enumeration_value ev on ev.id = ao1.level_id
+   WHERE ao1.id = <your ao id>
+   -- to get all trees for a resource, change to: WHERE ao1.root_record_id = <your root_record_id>
+   UNION ALL
+   SELECT ao2.id
+   , ao2.display_string
+   , ao2.component_id
+   , ao2.parent_id
+   , ev.value as `ao_level`
+   , ao_path.level + 1 as level
+   FROM ao_path
+   JOIN archival_object ao2 on ao_path.parent_id = ao2.id
+   LEFT JOIN enumeration_value ev on ev.id = ao2.level_id)
+SELECT GROUP_CONCAT(CONCAT(display_string, ' (', CONCAT(UPPER(SUBSTRING(ao_level,1,1)), LOWER(SUBSTRING(ao_level,2))), ' ', IF(component_id is not NULL, CAST(component_id as CHAR), "N/A"), ')') ORDER BY level DESC SEPARATOR ' > ') as tree
+FROM ao_path;
+```
+
+To retrieve all children (MySQL 8+):
+
+To retrieve both parents and children (MySQL 8+):
+
+To retrieve all parents of a record in MySQL 5.7 and below, run the following query:
+
+```sql
+# change 1749840 to your desired archival object id
+SELECT (SELECT GROUP_CONCAT(CONCAT(display_string, ' (', ao_level, ')') SEPARATOR ' < ') as parent_path
+        FROM (SELECT T2.display_string as display_string
+              , ev.value as ao_level
+              FROM (SELECT @r AS _id
+                    , @p := @r AS previous
+                    , (SELECT @r := parent_id FROM archival_object WHERE id = _id) AS parent_id
+                    , @l := @l + 1 AS lvl
+                    FROM ((SELECT @r := 1749840, @p := 0, @l := 0) AS vars,
+                          archival_object h)
+                    WHERE @r <> 0 AND @r <> @p) AS T1
+              JOIN archival_object T2 ON T1._id = T2.id
+              LEFT JOIN enumeration_value ev on ev.id = T2.level_id
+              WHERE T2.id != 1749840
+              ORDER BY T1.lvl DESC) as all_parents) as p_path
+     , ao.display_string
+     , CONCAT('/repositories/', ao.repo_id, '/archival_objects/', ao.id) as uri
+FROM archival_object ao
+WHERE ao.id = 1749840
+```
+
+To retrieve all children of a record (MySQL 5.7 and below):
+
+```sql
+
+```
+
+### Sequencing
+
+The ordering of records in a `resource`, `classification`, or `digital_object` tree is determined by the `position` field. 
The position field is also used to order values in the `enumeration_value` and `assessment_attribute_definition` tables:
+
+| Column name | Description | Example | Found in |
+| ----------- | ----------- | ------- | -------- |
+| `position` | The position of the record or value relative to its siblings under the same parent | `168000` | `enumeration_value`, `assessment_attribute_definition`, `classification_term`, `digital_object_component`, `archival_object` |
+
+## Boolean fields
+
+Many records and subrecords include fields which contain integers (`0` or `1`) corresponding to boolean values.
+
+| Boolean fields | Description | Found in |
+| -------------- | ----------- | -------- |
+| `publish` | Whether the record is published to the public interface | `subnote_metadata`, `file_version`, `external_document`, `accession`, `classification`, `agent_person`, `agent_family`, `agent_software`, `agent_corporate_entity`, `classification_term`, `revision_statement`, `repository`, `note`, `digital_object`, `digital_object_component`, `archival_object`, `resource` |
+| `suppressed` | Whether the record is suppressed | `accession`, `archival_object`, `assessment_reviewer_rlshp`, `assessment_rlshp`, `classification`, `classification_creator_rlshp`, `classification_rlshp`, `classification_term`, `classification_term_creator_rlshp`, 
`digital_object`, `digital_object_component`, `enumeration_value`, `event`, `event_link_rlshp`, `instance_do_link_rlshp`, `linked_agents_rlshp`, `location_profile_rlshp`, `owner_repo_rlshp`, `related_accession_rlshp`, `related_agents_rlshp`, `resource`, `spawned_rlshp`, `surveyed_by_rlshp`, `top_container_housed_at_rlshp`, `top_container_link_rlshp`, `top_container_profile_rlshp` |
+| `restrictions_apply` | Whether access restrictions apply to the described materials | `accession`, `archival_object` |
+
+<!-- NEED TO ADD the restriction field here - the resource and dig ob recs have it -->
+<!-- also add the hidden field in repo and the multiple restrictions in accession -->
+<!-- I think this is good to mention because these are editable via the API but also have their own endpoints. So they are a little different. Should also mention that they are bools in the API docs. -->
+
+## Read-only fields
+
+Several system-generated, read-only fields appear across many tables. These include database identifiers, timestamps that track record creation and modification, and fields that record the username of the user that created and last modified each record.
+
+| Most common read-only fields | Description |
+| ---------------------------- | ----------- |
+| `id` (primary key) | Database identifier for each record |
+| `system_mtime` | The last time the record was modified by the system |
+| `created_by` | The user that created a record |
+| `last_modified_by` | The user that last modified a record |
+| `user_mtime` | The time that a record was last modified by a user |
+| `create_time` | The time that a record was created |
+| `lock_version` | This field is incremented each time a record is updated. This provides a method of tracking updates and managing near-simultaneous edits by different users. 
|
+| `json_schema_version` | The JSON schema version |
+| `aspace_relationship_position` | The position of a linked record in a list of other linked records |
+| `is_slug_auto` | A boolean value that indicates whether a slug was auto-generated |
+| `system_generated` | A boolean value that indicates whether a field was system-generated |
+| `display_string` | A system-generated field which concatenates the title and date fields of an archival object record |
+
+**NOTE**: For subrecord tables, these fields may hold unexpected data: because subrecords are deleted and recreated upon each save of a main or supporting record, their creation and modification times are also reset and will not reflect the original creation date of the subrecord itself. For resource records, the timestamp only records the time that the resource itself was modified, not the last time any of its components were modified.
+
+<!-- ## Querying the ArchivesSpace Database -->
diff --git a/src/content/docs/fr/architecture/directories.md b/src/content/docs/fr/architecture/directories.md
new file mode 100644
index 0000000..8d1c026
--- /dev/null
+++ b/src/content/docs/fr/architecture/directories.md
@@ -0,0 +1,90 @@
+---
+title: Directory structure
+description: Provides short summaries of the different directories in the ArchivesSpace codebase.
+---
+
+ArchivesSpace is made up of several components that are kept in separate directories.
+
+## \_yard
+
+This directory contains the code for the documentation tool used to generate the GitHub Pages site here: http://archivesspace.github.io/archivesspace/
+
+## backend
+
+This directory contains the code that handles the database and the API.
+
+## build
+
+This directory contains the code used to build the application. It includes the commands used to run the development servers and test suites, and to build releases. ArchivesSpace is a JRuby application, and Apache Ant is used to build it.
+
+## clustering
+
+This directory contains code that can be used when clustering an ArchivesSpace installation.
+
+## common
+
+This directory contains code that is used across two or more of the components. It includes configuration options, database schemas and migrations, and translation files.
+
+## contribution_files
+
+This directory contains documentation and PDFs of the license agreement files.
+
+## docs
+
+This directory contains documentation files that are included in a release.
+
+## frontend
+
+This directory contains the staff interface Ruby on Rails application.
+
+## indexer
+
+This directory contains the indexer Sinatra application.
+
+## jmeter
+
+This directory contains an example that can be used to set up Apache JMeter to load-test functional behavior and measure performance.
+
+## launcher
+
+This directory contains the code that launches (starts, restarts, and stops) an ArchivesSpace application.
+
+## oai
+
+This directory contains the OAI-PMH Sinatra application.
+
+## plugins
+
+This directory contains plugins supported by the ArchivesSpace Program Team.
+
+## proxy
+
+This directory contains the Docker proxy code.
+
+## public
+
+This directory contains the public interface Ruby on Rails application.
+
+## reports
+
+This directory contains the reports code.
+
+## scripts
+
+This directory contains scripts necessary for building, deploying, and other ArchivesSpace tasks.
+
+## selenium
+
+This directory contains the Selenium tests.
+
+## solr
+
+This directory contains the Solr code.
+
+## stylesheets
+
+This directory contains XSL stylesheets used by ArchivesSpace.
+
+## supervisord
+
+This directory contains a tool that can be used to run the development servers.
diff --git a/src/content/docs/fr/architecture/frontend.md b/src/content/docs/fr/architecture/frontend.md new file mode 100644 index 0000000..50e9665 --- /dev/null +++ b/src/content/docs/fr/architecture/frontend.md @@ -0,0 +1,7 @@ +--- +title: Staff interface +--- + +This document provides an overview of the parts of the ArchivesSpace codebase which control the frontend/staff interface. For guidance on using the ArchivesSpace staff interface, consult the [ArchivesSpace Help Center](https://archivesspace.atlassian.net/wiki/spaces/ArchivesSpaceUserManual/overview) (ArchivesSpace members only). + +> Additional documentation needed diff --git a/src/content/docs/fr/architecture/index.md b/src/content/docs/fr/architecture/index.md new file mode 100644 index 0000000..786335d --- /dev/null +++ b/src/content/docs/fr/architecture/index.md @@ -0,0 +1,25 @@ +--- +title: Architecture and components +description: Abbreviated description of how the different parts of ArchivesSpace interact with each other with links to each section. +--- + +ArchivesSpace is divided into several components: the backend, which +exposes the major workflows and data types of the system via a +REST API, a staff interface, a public interface, and a search system, +consisting of Solr and an indexer application. + +These components interact by exchanging JSON data. The format of this +data is defined by a class called JSONModel. 
+ +- [Overview](./overview) +- [JSONModel -- a validated ArchivesSpace record](./jsonmodel) +- [The ArchivesSpace backend](./backend) +- [The ArchivesSpace staff interface](./frontend) +- [Background Jobs](./jobs) +- [Search indexing](./search) +- [The ArchivesSpace public user interface](./public) +- [OAI-PMH interface](./oai-pmh) +- [API](./api) +- [Database](./database) +- [Directory structure](./directories) +- [Dependencies](./languages) diff --git a/src/content/docs/fr/architecture/jobs.md b/src/content/docs/fr/architecture/jobs.md new file mode 100644 index 0000000..5e2ef01 --- /dev/null +++ b/src/content/docs/fr/architecture/jobs.md @@ -0,0 +1,118 @@ +--- +title: Background jobs +description: Describes long running processes, called background jobs, in ArchivesSpace, as well as how they are structured using types, runners, and schemas. Additional guidance on setting jobs to run concurrently and how to add a new job type using a plugin. +--- + +ArchivesSpace provides a mechanism for long-running processes to run +asynchronously. These processes are called `Background Jobs`. + +## Managing Jobs in the Staff UI + +The `Create` menu has a `Background Job` option which shows a submenu of job +types that the current user has permission to create. (See below for more +information about job permissions and hidden jobs.) Selecting one of these +options will take the user to a form to enter any parameters required for the +job and then to create it. + +When a job is created it is placed in the `Background Job Queue`. Jobs in the +queue will be run in the order they were created. (See below for more +information about multiple threads and concurrent jobs.) + +The `Browse` menu has a `Background Jobs` option. This takes the user to a list +of jobs arranged by their status. The user can then view the details of a job, +and cancel it if they have permission. + +## Permissions + +A user must have the `create_job` permission to create a job. 
By default, this +permission is included in the `repository_basic_data_entry` group. + +A user must have the `cancel_job` permission to cancel a job. By default, this +permission is included in the `repository_managers` group. + +When a JobRunner registers, it can specify additional create and cancel +permissions. (See below for more information.) + +## Types, Runners and Schemas + +Each job has a type, and each type has a registered runner to run jobs of that +type and a JSONModel schema to define its parameters. + +### Registered JobRunners + +All jobs of a type are handled by a registered `JobRunner`. The job runner +classes are located here: + +``` +backend/app/lib/job_runners/ +``` + +It is possible to define additional job runners from a plugin. (See below for +more information about plugins.) + +A job runner class must subclass `JobRunner`, register to run one or more job +types, and implement a `#run` method for jobs that it handles. + +When a job runner registers for a job type, it can set some options: + +- `:hidden` +  - Defaults to `false` +  - If this is set, this job type will not be shown in the list of available job types. +- `:run_concurrently` +  - Defaults to `false` +  - If this is set to `true`, more than one job of this type can run at the same time. +- `:create_permissions` +  - Defaults to `[]` +  - A permission or list of permissions required, in addition to `create_job`, to create jobs of this type. +- `:cancel_permissions` +  - Defaults to `[]` +  - A permission or list of permissions required, in addition to `cancel_job`, to cancel jobs of this type. + +For more information about defining a job runner, see the `JobRunner` superclass: + +``` +backend/app/lib/job_runner.rb +``` + +### JSONModel Schemas + +A job type also requires a JSONModel schema that defines the parameters to run a +job of the type. The schema name must be the same as the type that the runner +registers for. 
For example: + +``` +common/schemas/import_job.rb +``` + +This schema, for `JSONModel(:import_job)`, defines the parameters for running a +job of type `import_job`. + +## Concurrency + +ArchivesSpace can be configured to run more than one background job at a time. +By default, there will be two threads available to run background jobs. +The configuration looks like this: + +``` +AppConfig[:job_thread_count] = 2 +``` + +The `BackgroundJobQueue` will start this number of threads at start up. Those +threads will then poll for queued jobs and run them. + +When a job runner registers, it can set an option called `:run_concurrently`. +This is `false` by default. When set to `false` a job thread will not run a job +if there is already a job of that type running. The job will remain on the queue +and will be run when there are no longer any jobs of its type running. + +When set to `true` a job will be run when it comes to the front of the queue +regardless of whether there is a job of the same type running. + +## Plugins + +It is possible to add a new job type from a plugin. ArchivesSpace includes a +plugin that demonstrates how to do this: + +``` +plugins/jobs_example +``` diff --git a/src/content/docs/fr/architecture/jsonmodel.md b/src/content/docs/fr/architecture/jsonmodel.md new file mode 100644 index 0000000..9002c8b --- /dev/null +++ b/src/content/docs/fr/architecture/jsonmodel.md @@ -0,0 +1,103 @@ +--- +title: JSONModel +description: Describes the rules and structure behind the JSONModel class, which expresses the rules for different types of archival records. JSONModel instances are the primary data interchange mechanism for ArchivesSpace. +--- + +The ArchivesSpace system is concerned with managing a number of +different archival record types. 
Each record can be expressed as a +set of nested key/value pairs, and associated with each record type is +a number of rules that describe what it means for a record of that +type to be valid: + +- some fields are mandatory, some optional +- some fields can only take certain types +- some fields can only take values from a constrained set +- some fields are dependent on other fields +- some record types can be nested within other record types +- some record types may be related to others through a hierarchy +- some record types form a relationship graph with other record + types + +The JSONModel class provides a common language for expressing these +rules that all parts of the application can share. There is a +JSONModel class instance for each type of record in the system, so: + +```ruby +JSONModel(:digital_object) +``` + +is a class that knows how to take a hash of properties and make sure +those properties conform to the specification of a Digital Object: + +```ruby +JSONModel(:digital_object).from_hash(myhash) +``` + +If it passes validation, a new JSONModel(:digital_object) instance is +returned, which provides accessors for its values and +facilities for round-tripping between JSON documents and regular Ruby +hashes: + +```ruby +obj = JSONModel(:digital_object).from_hash(myhash) + +obj.title # or obj['title'] +obj.title = 'a new title' # or obj['title'] = 'a new title' + +obj._exceptions # Validates the object and reports any issues + +obj.to_hash # Turn the JSONModel object back into a regular hash +obj.to_json # Serialize the JSONModel object into JSON +``` + +Much of the validation performed by JSONModel is provided by the JSON +schema definitions listed in the `common/schemas` directory. JSON +schemas provide a standard way of declaring which properties a record +may and may not contain, along with their types and other +restrictions. 
ArchivesSpace uses these schemas to capture the +validation rules defining each record type in a declarative and +relatively self-documenting fashion. + +JSONModel instances are the primary data interchange mechanism for the +ArchivesSpace system: the API consumes and produces JSONModel +instances (in JSON format), and much of the user interface's life is +spent turning forms into JSONModel instances and shipping them off to +the backend. + +## JSONModel::Client -- A high-level API for interacting with the ArchivesSpace backend + +To save the need for a lot of HTTP request wrangling, ArchivesSpace +ships with a module called JSONModel::Client that simplifies the +common CRUD-style operations. Including this module just requires +passing an additional parameter when initializing JSONModel: + +```ruby +JSONModel::init(:client_mode => true, :url => @backend_url) +include JSONModel +``` + +If you'll be working against a single repository, it's convenient to +set it as the default for subsequent actions: + +```ruby +JSONModel.set_repository(123) +``` + +Then, several additional JSONModel methods are available: + +```ruby +# As before, get a paginated list of accessions (GET) +JSONModel(:accession).all(:page => 1) + +# Create a new accession (POST) +obj = JSONModel(:accession).from_hash(:title => "A new accession", ...) +obj.save + +# Get a single accession by ID (GET) +obj = JSONModel(:accession).find(123) + +# Update an existing accession (POST) +obj = JSONModel(:accession).find(123) +obj.title = "Updated title" +obj.save +``` diff --git a/src/content/docs/fr/architecture/languages.md b/src/content/docs/fr/architecture/languages.md new file mode 100644 index 0000000..e36d138 --- /dev/null +++ b/src/content/docs/fr/architecture/languages.md @@ -0,0 +1,18 @@ +--- +title: Dependencies +description: Lists the technical stack of the application, including programming languages and platforms. 
+--- + +ArchivesSpace components are constructed using several programming languages, platforms, and additional open source projects. + +## Languages + +The languages used are Java, JRuby, Ruby, JavaScript, and CSS. + +## Platforms + +The backend, OAI harvester, and indexer are Sinatra apps. The staff and public user interfaces are Ruby on Rails apps. + +## Additional open source projects + +The database used out of the box and for testing is Apache Derby. The database suggested for production is MySQL. The index platform is Apache Solr. diff --git a/src/content/docs/fr/architecture/oai-pmh.md b/src/content/docs/fr/architecture/oai-pmh.md new file mode 100644 index 0000000..b538aa3 --- /dev/null +++ b/src/content/docs/fr/architecture/oai-pmh.md @@ -0,0 +1,130 @@ +--- +title: OAI-PMH interface +description: Describes how OAI-PMH is set up in ArchivesSpace and how to harvest data using OAI-PMH with example links and additional information. +--- + +A starter OAI-PMH interface for ArchivesSpace, allowing other systems to harvest +your records, is included in version 2.1.0. Additional features and functionality +will be added in later releases. + +By default, the OAI-PMH interface runs on port 8082. A sample request page is +available at http://localhost:8082/sample. (To access it, make sure that you +have set `AppConfig[:oai_proxy_url]` appropriately.) + +The system provides responses to a number of standard OAI-PMH requests, +including GetRecord, Identify, ListIdentifiers, ListMetadataFormats, +ListRecords, and ListSets. Unpublished and suppressed records and elements are +not included in any of the OAI-PMH responses. + +Some responses require the URL parameter metadataPrefix. 
There are five +different metadata responses available: + +- EAD -- oai_ead (resources in EAD) +- Dublin Core -- oai_dc (archival objects and resources in Dublin Core) +- extended DCMI Terms -- oai_dcterms (archival objects and resources in DCMI Metadata Terms format) +- MARC -- oai_marc (archival objects and resources in MARC) +- MODS -- oai_mods (archival objects and resources in MODS) + +The EAD response for resources and MARC response for resources and archival +objects use the mappings from the built-in exporter for resources. The DC, +DCMI terms, and MODS responses for resources and archival objects use mappings +suggested by the community. + +Here are some example URLs and other information for these requests: + +**GetRecord** – needs a record identifier and metadataPrefix +Up to ArchivesSpace v3.5.1 OAI identifiers are in this format: + +`http://localhost:8082/oai?verb=GetRecord&identifier=oai:archivesspace//repositories/2/resources/138&metadataPrefix=oai_ead` + +Starting with ArchivesSpace v4.0.0 OAI identifiers are in the new format (notice the colon after the `oai:archivesspace` namespace part of the identifier): + +`http://localhost:8082/oai?verb=GetRecord&identifier=oai:archivesspace:/repositories/2/resources/138&metadataPrefix=oai_ead` + +see also: https://github.com/code4lib/ruby-oai/releases/tag/v1.0.0 + +**Identify** + +`http://localhost:8082/oai?verb=Identify` + +**ListIdentifiers** – needs a metadataPrefix + +`http://localhost:8082/oai?verb=ListIdentifiers&metadataPrefix=oai_dc` + +**ListMetadataFormats** + +`http://localhost:8082/oai?verb=ListMetadataFormats` + +**ListRecords** – needs a metadataPrefix + +`http://localhost:8082/oai?verb=ListRecords&metadataPrefix=oai_dcterms` + +**ListSets** + +`http://localhost:8082/oai?verb=ListSets` + +Harvesting the ArchivesSpace OAI-PMH server without specifying a set will yield +all published records across all repositories. +Predefined sets can be accessed using the set parameter. 
In order to retrieve +records from sets, include a set parameter in the URL and the DC metadataPrefix, +such as "&set=collection&metadataPrefix=oai_dc". These sets can be from +configured sets as shown above or from the following levels of description: + +- Class -- class +- Collection -- collection +- File -- file +- Fonds -- fonds +- Item -- item +- Other_Level -- otherlevel +- Record_Group -- recordgrp +- Series -- series +- Sub-Fonds -- subfonds +- Sub-Group -- subgrp +- Sub-Series -- subseries + +In addition to the sets based on level of description, you can define sets +based on repository codes and/or sponsors in the config/config.rb file: + +```ruby +AppConfig[:oai_sets] = { + 'repository_set' => { + :repo_codes => ['hello626'], + :description => "A set of one or more repositories", + }, + 'sponsor_set' => { + :sponsors => ['The_Sponsor'], + :description => "A set of one or more sponsors", + } +} +``` + +The interface implements resumption tokens for pagination of results. As an +example, the following URL format should be used to page through the results +from a ListRecords request: + +`http://localhost:8082/oai?verb=ListRecords&metadataPrefix=oai_ead` + +using the resumption token: + +`http://localhost:8082/oai?verb=ListRecords&resumptionToken=eyJtZXRhZGF0YV9wcmVmaXgiOiJvYWlfZWFkIiwiZnJvbSI6IjE5NzAtMDEtMDEgMDA6MDA6MDAgVVRDIiwidW50aWwiOiIyMDE3LTA3LTA2IDE3OjEwOjQxIFVUQyIsInN0YXRlIjoicHJvZHVjaW5nX3JlY29yZHMiLCJsYXN0X2RlbGV0ZV9pZCI6MCwicmVtYWluaW5nX3R5cGVzIjp7IlJlc291cmNlIjoxfSwiaXNzdWVfdGltZSI6MTQ5OTM2MTA0Mjc0OX0=` + +Note: you do not use the metadataPrefix when you use the resumptionToken + +The ArchivesSpace OAI-PMH server supports persistent deletes, so harvesters +will be notified of any records that were deleted since +they last harvested. 
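Returning to pagination: the resumption-token loop described above can be sketched in Ruby using only the standard library. This is an illustrative sketch, not server code — the XML below is a hand-written stand-in for one page of a real ListRecords response, and the token value is invented:

```ruby
require 'rexml/document'
require 'cgi'

base = 'http://localhost:8082/oai'

# Hand-written stand-in for one page of a ListRecords response.
# A real page would carry full records; only the token matters here.
sample_page = <<~XML
  <OAI-PMH>
    <ListRecords>
      <record>
        <header>
          <identifier>oai:archivesspace:/repositories/2/resources/138</identifier>
        </header>
      </record>
      <resumptionToken>abc123</resumptionToken>
    </ListRecords>
  </OAI-PMH>
XML

doc = REXML::Document.new(sample_page)
token = REXML::XPath.first(doc, '//resumptionToken')&.text

# An absent or empty token means the list is complete; otherwise the
# follow-up request carries ONLY the verb and the token -- no
# metadataPrefix, as noted above.
next_url =
  if token.nil? || token.empty?
    nil
  else
    "#{base}?verb=ListRecords&resumptionToken=#{CGI.escape(token)}"
  end

puts next_url
```

Repeating this until the response contains no `resumptionToken` walks the whole result set.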
+ +Mixed content is removed from Dublin Core, dcterms, MARC, and MODS field outputs +in the OAI-PMH response (e.g., a scope note mapped to a DC description field +would not include `<p>`, `<abbr>`, `<address>`, `<archref>`, `<bibref>`, `<blockquote>`, +`<chronlist>`, `<corpname>`, `<date>`, `<emph>`, `<expan>`, `<extptr>`, `<extref>`, +`<famname>`, `<function>`, `<genreform>`, `<geogname>`, `<lb>`, `<linkgrp>`, `<list>`, +`<name>`, `<note>`, `<num>`, `<occupation>`, `<origination>`, `<persname>`, `<ptr>`, `<ref>`, `<repository>`, `<subject>`, `<table>`, `<title>`, `<unitdate>`, `<unittitle>`). + +The component level records include inherited data from superior hierarchical +levels of the finding aid. Element inheritance is determined by institutional +system configuration (editable in the config/config.rb file) as implemented for +the Public User Interface. + +ARKs have not yet been implemented, pending more discussion of how they should +be formulated. diff --git a/src/content/docs/fr/architecture/overview.md b/src/content/docs/fr/architecture/overview.md new file mode 100644 index 0000000..b4a7375 --- /dev/null +++ b/src/content/docs/fr/architecture/overview.md @@ -0,0 +1,15 @@ +--- +title: Architecture Overview +description: The main components of ArchivesSpace and how they interact with each other and the end users. +--- + +ArchivesSpace is divided into several components: + +- the backend, which exposes the major workflows and data types of the system via a REST API, +- a staff interface, +- a public interface, +- a search system, consisting of Solr and an indexer application. + +These components interact by exchanging JSON data. The format of this data is defined by a class called JSONModel. 
+ +![archivesspace_architecture](./archivesspace_architecture.svg) diff --git a/src/content/docs/fr/architecture/public.md b/src/content/docs/fr/architecture/public.md new file mode 100644 index 0000000..aa6419d --- /dev/null +++ b/src/content/docs/fr/architecture/public.md @@ -0,0 +1,154 @@ +--- +title: Public user interface +description: Directions for configuration options for the ArchivesSpace Public User Interface, as well as an explanation of inheritance of data in records. +--- + +The ArchivesSpace Public User Interface (PUI) provides a public +interface to your ArchivesSpace collections. In a default +ArchivesSpace installation it runs on port `:8081`. + +## Configuration + +The PUI is configured using the standard ArchivesSpace `config.rb` +file, with the relevant configuration options prefixed with +`:pui_`. + +To see the full list of available options, see the file +[`https://github.com/archivesspace/archivesspace/blob/master/common/config/config-defaults.rb`](https://github.com/archivesspace/archivesspace/blob/master/common/config/config-defaults.rb) + +### Preserving Patron Privacy + +The **:block_referrer** key in the configuration file (default: **true**) determines whether the full referring URL is +transmitted when the user clicks a link to a website outside the web domain of your instance of ArchivesSpace. This +protects your patrons from tracking by that site. + +### Main Navigation Menu + +You can choose not to display one or more of the links on the main +(horizontal) navigation menu, either globally or by repository, if you +have more than one repository. You manage this through the +`:pui_hide` options in the `config/config.rb` file. + +### Repository Customization + +#### Display of "badges" on the Repository page + +You can configure which badges appear on the Repository page, either +globally or by repository. See the `:pui_hide` configuration options +for these too. 
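In `config/config.rb` this typically means setting individual `:pui_hide` keys. The sketch below is illustrative only — the key name is hypothetical, so check the `:pui_hide` entries in `config-defaults.rb` for the exact keys your version defines:

```ruby
# Sketch: hide one element of the PUI globally. The :record_badge key
# here is illustrative -- consult config-defaults.rb for the real list
# of keys your ArchivesSpace version supports.
AppConfig[:pui_hide][:record_badge] = true
```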
+ +### Activation of the "Request" button on archival object pages + +You can configure, either globally or by repository, whether the +"Request" button is active on archival object pages for objects that +don't have an associated Top Container. See the +`:pui_requests_permitted_for_containers_only` configuration option to +modify this. + +### I18n + +You can change the text and labels used by the PUI by editing the +locale files under the `locales/public` directory of your +ArchivesSpace distribution. + +### Addition of a "lead paragraph" + +You can also use the custom `.yml` files, described above, to add a +custom "lead paragraph" (including HTML markup) for one or more of +your repositories, keyed to the repository's code. + +For example, if your repository, `My Wonderful Repository`, has a code of `MWR`, this is what you might see in the +custom `en.yml`: + +```yaml +en: +  repos: +    mwr: +      lead_graph: This <strong>amazing</strong> repository has so much to offer you! +``` + +## Development + +To run a development server, the PUI follows the same pattern as the rest of ArchivesSpace. 
From your ArchivesSpace checkout: + +```shell + # Prepare all dependencies + build/run bootstrap + + # Run the backend development server (and Solr) + build/run backend:devserver + + # Run the indexer + build/run indexer + + # Finally, run the PUI itself + build/run public:devserver +``` + +## Inheritance + +### Three options for inheritance: + +- Directly inherit a value for a field – the record has no value for the field and you want the value in the field to display as if it were the record’s own [uncomment the inheritance section in the config, set desired field (property) to inherit_directly => true] +- Indirectly inherit a value for a field – the record has no value for the field and you want to display the value from a higher level, but precede it with a note that indicates that it comes from that higher level, such as "From the collection" [uncomment the inheritance section in the config, set desired field (property) to inherit_directly => false] +- Don’t display the field at all – the record has no value of its own for the field and you don’t want it to display at all [uncomment the inheritance section in the config, delete the lines for the desired field (property)] + +### Archival Inheritance + +With the new version of the Public Interface, all elements of description can be inherited. This is especially important since the PUI displays each level of description as its own webpage. + +Each element of description can be inherited either directly or indirectly. When an element is inherited directly, it will appear as if that element was attached directly to that archival object in the staff interface. When an element is inherited indirectly, it will appear on the lower-level of description in the public interface, but the inherited element will be preceded with a note indicating the level of the ancestor from which the note is inherited (e.g. From the Collection, or From the Sub-Series). 
In both cases, the element will only be inherited if it is missing from the archival object. Additionally, the element of description will only be inherited from the closest ancestor. In other words, if a top-level collection record has an access restrictions note, and a child-level series record has an access restrictions note, but the sub-series child of that series record lacks an access restrictions note, then the sub-series record will inherit only the access restrictions note from its parent series record. + +Additionally, the identifier element in ArchivesSpace, which is better known as the Reference Code in ISAD-G and DACS, can be inherited in a composite manner. When inherited in a composite manner, the inherited elements will be concatenated together. In other words, an identifier at the item level could look like this: MSS 1. Series A. Item 1. Though the archival object has an identifier of "Item 1", a composite identifier is displayed since the series-level record to which the item belongs has an identifier of "Series A", which in turn also belongs to a collection-level record that has an identifier of "MSS 1". + +By default, the following elements are turned on for inheritance: + +- Title (direct inheritance) +- Identifier (indirect inheritance), but by default the identifier inherits from ancestor archival objects only; it does NOT include the resource identifier. 
+ +Also, it is advised to inherit this element in a composite fashion once it is determined whether the level of description should or should not display as part of the identifier, which will depend upon local data-entry practices. + +- Language code (direct inheritance, but it does NOT display anywhere in the interface currently; eventually, this could be used for faceting) +- Dates (direct inheritance) +- Extents (indirect inheritance) +- Creator (indirect inheritance) +- Access restrictions note (direct inheritance) +- Scope and contents note (indirect inheritance) +- Language of Materials note (indirect inheritance, but there seems to be a bug right now so that the Language notes always show up as being directly inherited. See AR-XXXX) + +See https://github.com/archivesspace/archivesspace/blob/master/common/config/config-defaults.rb#L296-L396 for more information and examples. + +Also, a video overview of this feature, which was recorded before development was finished, is available online: +https://vimeo.com/195457286 + +### Composite Identifier Inheritance + +If you add the following lines to your configuration file, restart ArchivesSpace, and then let the indexer re-index your records, you can gain the benefit of composite identifiers: + +```ruby +AppConfig[:record_inheritance][:archival_object][:composite_identifiers] = { +  :include_level => true, +  :identifier_delimiter => '. ' +} +``` + +To add extra fields, such as subjects, you can add the following: + +```ruby +inherited_fields_extras = [ +  { +    code: 'subjects', +    property: 'subjects', +    inherit_if: proc { |json| json.select { |j| true } }, +    inherit_directly: false, +  }, +] +``` + +When you set `include_level` to `true`, that means the archival object level will be included in the identifier so that you don't have to repeat that data. 
For example, if the level of description is "Series" and the archival object identifier is "1", and the parent resource identifier is "MSS 1", then the composite identifier would display as "MSS 1. Series 1" at the series 1 level, and any descendant record. If you set include_level to false, then the display would be "MSS 1. 1" + +### License + +ArchivesSpace is released under the [Educational Community License, +version 2.0](http://opensource.org/licenses/ecl2.php). See the +[COPYING](https://github.com/archivesspace/archivesspace/blob/master/COPYING) file for more information. diff --git a/src/content/docs/fr/architecture/search.md b/src/content/docs/fr/architecture/search.md new file mode 100644 index 0000000..6320831 --- /dev/null +++ b/src/content/docs/fr/architecture/search.md @@ -0,0 +1,46 @@ +--- +title: Search indexing +description: Explanation of how ArchivesSpace uses Solr for indexing added/updated/deleted records and the differences between the periodic and real-time modes of indexing records. +--- + +The ArchivesSpace system uses Solr for its full-text search. As +records are added/updated/deleted by the backend, the corresponding +changes are made to the Solr index to keep them (roughly) +synchronized. + +Keeping the backend and Solr in sync is the job of the "indexer", a +separate process that runs in the background and watches for record +updates. The indexer operates in two modes simultaneously: + +- The periodic mode polls the backend to get a list of records that + were added/modified/deleted since it last checked. These changes + are propagated to the Solr index. This generally happens every 30 + to 60 seconds (and is configurable). +- The real-time mode responds to updates as they happen, applying + changes to Solr as soon as they're applied to the backend. This + aims to reflect updates within the search indexes in milliseconds + or seconds. + +The two modes of operation overlap somewhat, but they serve different +purposes. 
The periodic mode ensures that records are never missed due +to transient failures, and will bring the indexes up to date even if +the indexer hasn't run for quite some time--even creating them from +scratch if necessary. This mode is also used for indexing updates +made by bulk import processes and other updates that don't need to be +reflected in the indexes immediately. + +The real-time indexer mode attempts to apply updates to the index much +more quickly. Rather than polling, it performs a `GET` request +against the `/update-feed` endpoint of the backend. This endpoint +returns any records that were updated since the last time it was asked +and, most importantly, it leaves the request hanging if no records +have changed. + +By calling this endpoint in a loop, the real-time indexer spends most +of its time sitting around waiting for something to happen. The +moment a record is updated, the already-pending request to the +`/update-feed` endpoint yields the updated record, which is sent to +Solr and indexed immediately. This avoids the delays associated with +polling and keeps indexing latency low where it matters. For example, +newly created records should appear in the browse list by the time a +user views it. diff --git a/src/content/docs/fr/customization/authentication.md b/src/content/docs/fr/customization/authentication.md new file mode 100644 index 0000000..e68959a --- /dev/null +++ b/src/content/docs/fr/customization/authentication.md @@ -0,0 +1,139 @@ +--- +title: Additional authentication +description: Instructions on how to install and configure a custom authentication handler via a plugin. +--- + +ArchivesSpace supports LDAP-based authentication out of the box, but you can +authenticate against other password-based user directories by defining your own +authentication handler, creating a plug-in, and configuring your ArchivesSpace +instance to use it. 
If you would rather not have to create your own handler, +there is a [plugin](https://github.com/lyrasis/aspace-oauth) available that uses OAuth user authentication, which you can add +to your ArchivesSpace installation. + +## Creating a new authentication handler class to use in a plug-in + +An authentication handler is just a class that implements a couple of +key methods: + +- `initialize(opts)` -- An object constructor which receives the +  configuration block specified in the system's configuration. +- `name` -- A zero-argument method which just returns a string that +  identifies the instance of your handler. The format of this +  string isn't important: it just gets stored as a user attribute +  (in the ArchivesSpace database) to make it possible to tell which +  authentication source a user last successfully authenticated +  against. +- `authenticate(username, password)` -- a method which checks +  whether `password` is the correct password for `username`. If the +  password is correct, returns an instance of `JSONModel(:user)`. +  Otherwise, returns `nil`. + +A new instance of your handler will be created for each login attempt, +so there's no need to handle concurrency in your implementation. + +Your `authenticate` method can do whatever is required to check that +the provided password is correct, with the only constraint being that +it must return either `nil` or a `JSONModel(:user)` instance. + +The `JSONModel(:user)` class (whose JSON schema is defined in +`common/schemas/user.rb`) defines the set of properties that the +system needs for a user. When you return a `JSONModel(:user)` object, +its values will be used to create an ArchivesSpace user (if a user by +that name didn't exist already), or update the existing user (if they +were already known). + +**Note**: The `JSONModel(:user)` class validates the values you give it +against its JSON schema and throws a `JSONModel::ValidationException` +if anything isn't right. 
If this happens within your handler, the +exception will be logged and the authentication request will fail. + +### A skeleton implementation + +Suppose you already have a database with a table containing users that +should be able to log in to ArchivesSpace. Below is a sketch of an +authentication handler that will connect to this database and use it +for authentication. + +```ruby +# For this example we'll use the Sequel database toolkit. Note that +# this isn't necessary--you could use whatever database library you +# like here. +require 'sequel' + +class MyDatabaseAuth + + # For easy access to the JSONModel(:user) class + include JSONModel + + + def initialize(definition) + # Store the database connection details for use at + # authentication time. + @db_url = definition[:db_url] or raise "Need a value for :db_url" + end + + + # Just for informational purposes. Return a string containing our + # database URL. + def name + "MyDatabaseAuth - #{@db_url}" + end + + + def authenticate(username, password) + # Open a connection to the database + Sequel.connect(@db_url) do |db| + + # Check whether we have an entry for the given username + # and password in our database's "users" table + user = db[:users].filter(:username => username, + :password => password). + first + + if !user + # The user couldn't be found, or their password was wrong. + # Authentication failed. + return nil + end + + # Build and return a JSONModel(:user) instance from fields in the database + JSONModel(:user).from_hash(:username => username, + :name => user[:user_full_name]) + + end + end + +end +``` + +In order to use your new authentication handler, you'll need to add it to the plug-in +architecture in ArchivesSpace and enable it. Create a new directory, called our_auth +perhaps, in the plugins directory of your ArchivesSpace installation. Inside +that directory create this directory hierarchy `backend/model/` and place the +new class file there. Next, configure the new handler. 
+ +## Modifying your configuration + +To have ArchivesSpace invoke your new authentication handler, just add +a new entry to the `:authentication_sources` configuration block in the +`config/config.rb` file. + +A configuration for the above example might be as follows: + +```ruby +AppConfig[:authentication_sources] = [{ +    :model => 'MyDatabaseAuth', +    :db_url => 'jdbc:mysql://localhost:3306/somedb?user=myuser&password=mypassword', +  }] +``` + +## Add the plug-in to the list of plug-ins already enabled + +In the `config/config.rb` file, find the setting of `AppConfig[:plugins]` and add +a reference to the new plug-in there. For example, if you named it `our_auth`, the +`AppConfig[:plugins]` setting may look something like this: + +`AppConfig[:plugins] = ['local', 'hello_world', 'our_auth']` + +Restart your ArchivesSpace installation and you should now see authentication +requests hitting your new handler. diff --git a/src/content/docs/fr/customization/bower.md b/src/content/docs/fr/customization/bower.md new file mode 100644 index 0000000..1197f7f --- /dev/null +++ b/src/content/docs/fr/customization/bower.md @@ -0,0 +1,68 @@ +--- +title: Managing frontend assets with Bower +description: Instructions on how to add static assets to the core project. +--- + +This is aimed at developers and applies to the 'frontend' application only. + +If you wish to add static assets to the core project (i.e., JavaScript, CSS, +Less files), please use `bower` to add and install them so we know what's what +and when to upgrade. + +If you wish to do a good deed for ArchivesSpace you can track down the source +of any vendor assets not included in bower.json and get them updated and +installed according to this protocol. 

## General Setup

### Step 1: install npm

On OSX, for example:

```shell
brew install npm
```

### Step 2: install Bower

```shell
npm install bower -g
```

### Step 3: install components

```shell
bower install
```

## Adding a static asset to ASpace Frontend (Staff UI)

### Step 1: add the component

```shell
bower install <PACKAGE NAME> --save
```

### Step 2: map Bower > Rails

Edit the `bower.json` file to map the assets you want from `bower_components` to `assets`. See the examples in `bower.json`. This is kind of a hack to work around https://github.com/blittle/bower-installer/issues/75.

### Step 3: Install assets

```shell
alias npm-exec='PATH=$(npm bin):$PATH'
npm-exec bower-installer
```

### Step 4: Check assets in

Check the installed assets into Git. We version control `bower.json` and the installed files, but not the `bower_components` directory.

### Production!

Don't forget - if you are adding assets that don't have a `.js` extension, you need to add them to `frontend/config/environments/production.rb`.

diff --git a/src/content/docs/fr/customization/configuration.md b/src/content/docs/fr/customization/configuration.md
new file mode 100644
index 0000000..ef98c89
--- /dev/null
+++ b/src/content/docs/fr/customization/configuration.md
@@ -0,0 +1,1249 @@
---
title: Configuration
description: Lists all configuration options available within the config/config.rb file, including configuration names, values, and suggestions for setup.
---

The primary configuration for ArchivesSpace is done in the `config/config.rb` file. By default, this file contains the default settings, indicated by commented-out lines (those starting with "#" in the file). You can adjust these settings by adding new lines that change the default and restarting ArchivesSpace. Be sure that your new settings are not commented out (i.e. do NOT start with a "#"), otherwise the settings will not take effect.
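
For example, to change the staff interface URL you would add an active line below the commented default (the hostname here is illustrative):

```ruby
# Commented out -- this is just the documented default and has no effect:
# AppConfig[:frontend_url] = "http://localhost:8080"

# Active override -- no leading "#", so it takes effect after a restart:
AppConfig[:frontend_url] = "http://staff.example.edu:8080"
```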

## Commonly changed settings

### Database config

#### `:db_url`

Set your database name and credentials. The default specifies that the embedded database should be used. It is recommended to use a MySQL database instead of the embedded database. For more info, see [Using MySQL](/provisioning/mysql).

This is an example of specifying MySQL credentials:

`AppConfig[:db_url] = "jdbc:mysql://127.0.0.1:3306/aspace?useUnicode=true&characterEncoding=UTF-8&user=as&password=as123"`

#### `:db_max_connections`

Set the maximum number of database connections used by the application. The default is derived from the number of indexer threads.

`AppConfig[:db_max_connections] = proc { 20 + (AppConfig[:indexer_thread_count] * 2) }`

### URLs for ArchivesSpace components

Set the ArchivesSpace backend port. The backend listens on port 8089 by default.

`AppConfig[:backend_url] = "http://localhost:8089"`

Set the ArchivesSpace staff interface (frontend) port. The staff interface listens on port 8080 by default.

`AppConfig[:frontend_url] = "http://localhost:8080"`

Set the ArchivesSpace public interface port. The public interface listens on port 8081 by default.

`AppConfig[:public_url] = "http://localhost:8081"`

Set the ArchivesSpace OAI server port. The OAI server listens on port 8082 by default.

`AppConfig[:oai_url] = "http://localhost:8082"`

Set the ArchivesSpace Solr index port. The Solr server listens on port 8090 by default.

`AppConfig[:solr_url] = "http://localhost:8090"`

Set the ArchivesSpace indexer port. The indexer listens on port 8091 by default.

`AppConfig[:indexer_url] = "http://localhost:8091"`

Set the ArchivesSpace API documentation port. The API documentation listens on port 8888 by default.

`AppConfig[:docs_url] = "http://localhost:8888"`

### Enabling ArchivesSpace components

Enable or disable specific components by setting the following settings to true or false (defaults to true):

```ruby
AppConfig[:enable_backend] = true
AppConfig[:enable_frontend] = true
AppConfig[:enable_public] = true
AppConfig[:enable_solr] = true
AppConfig[:enable_indexer] = true
AppConfig[:enable_docs] = true
AppConfig[:enable_oai] = true
```

### Application logging

By default, all logging is output on the screen while the archivesspace command is running. When running as a daemon/service, it is put into a file at `logs/archivesspace.out`. You can route log output to a different file per component by changing the log value to a filepath that ArchivesSpace has write access to.

You can also set the logging level for each component. Valid values are:

- `debug` (everything)
- `info`
- `warn`
- `error`
- `fatal` (severe only)

#### `AppConfig[:frontend_log]`

File for log output for the frontend (staff interface). Set to "default" to route log output to archivesspace.out.

#### `AppConfig[:frontend_log_level]`

Logging level for the frontend.

#### `AppConfig[:backend_log]`

File for log output for the backend. Set to "default" to route log output to archivesspace.out.

#### `AppConfig[:backend_log_level]`

Logging level for the backend.

#### `AppConfig[:pui_log]`

File for log output for the public UI. Set to "default" to route log output to archivesspace.out.

#### `AppConfig[:pui_log_level]`

Logging level for the public UI.

#### `AppConfig[:indexer_log]`

File for log output for the indexer. Set to "default" to route log output to archivesspace.out.

#### `AppConfig[:indexer_log_level]`

Logging level for the indexer.

### Database logging

#### `AppConfig[:db_debug_log]`

Set to true to log all SQL statements. Note that this will have a performance impact!

`AppConfig[:db_debug_log] = false`

#### `AppConfig[:mysql_binlog]`

Set to true if you have enabled MySQL binary logging.

`AppConfig[:mysql_binlog] = false`

### Solr backups

#### `AppConfig[:solr_backup_schedule]`

Set the Solr backup schedule, in cron syntax. The default shown below runs a backup at the start of every hour. See https://crontab.guru/ for information about the schedule syntax.

`AppConfig[:solr_backup_schedule] = "0 * * * *"`

#### `AppConfig[:solr_backup_number_to_keep]`

Number of Solr backups to keep (default = 1).

`AppConfig[:solr_backup_number_to_keep] = 1`

#### `AppConfig[:solr_backup_directory]`

Directory to store Solr backups.

`AppConfig[:solr_backup_directory] = proc { File.join(AppConfig[:data_directory], "solr_backups") }`

### Default Solr params

#### `AppConfig[:solr_params]`

Add default Solr params.

A simple example: use AND for search:

`AppConfig[:solr_params] = { "q.op" => "AND" }`

A more complex example: set the boost query value (bq) to boost the relevancy for the query string in the title, set the phrase fields parameter (pf) to boost the relevancy for the title when the query terms are in close proximity to each other, and set the phrase slop (ps) parameter for the pf parameter to indicate how close the proximity should be:

```ruby
AppConfig[:solr_params] = {
  "bq" => proc { "title:\"#{@query_string}\"*" },
  "pf" => 'title^10',
  "ps" => 0,
}
```

### Language

#### `AppConfig[:locale]`

Set the application's language (see the .yml files in https://github.com/archivesspace/archivesspace/tree/master/common/locales for a list of available locale codes). Default is English (:en):

`AppConfig[:locale] = :en`

### Plugin registration

#### `AppConfig[:plugins]`

Plug-ins to load. They will load in the order specified.

`AppConfig[:plugins] = ['local', 'lcnaf']`

### Thread count

#### `AppConfig[:job_thread_count]`

The number of concurrent threads available to run background jobs.
Introduced because long-running jobs were blocking the queue. Resist the urge to set this to a big number!

`AppConfig[:job_thread_count] = 2`

### OAI configuration options

**NOTE: As of version 2.5.2, the following parameters (oai_repository_name, oai_record_prefix, and oai_admin_email) have been deprecated. They should be set in the Staff User Interface. To set them, select the System menu in the Staff User Interface and then select Manage OAI-PMH Settings. These three settings are at the top of the page in the General Settings section. These settings will be completely removed from the config file when version 2.6.0 is released.**

#### `AppConfig[:oai_repository_name]`

`AppConfig[:oai_repository_name] = 'ArchivesSpace OAI Provider'`

#### `AppConfig[:oai_record_prefix]`

`AppConfig[:oai_record_prefix] = 'oai:archivesspace'`

#### `AppConfig[:oai_admin_email]`

`AppConfig[:oai_admin_email] = 'admin@example.com'`

#### `AppConfig[:oai_sets]`

In addition to the sets based on level of description, you can define OAI sets based on repository codes and/or sponsors as follows:

```ruby
AppConfig[:oai_sets] = {
  'repository_set' => {
    :repo_codes => ['hello626'],
    :description => "A set of one or more repositories",
  },

  'sponsor_set' => {
    :sponsors => ['The_Sponsor'],
    :description => "A set of one or more sponsors",
  },
}
```

## Other less commonly changed settings

### Default admin password

#### `AppConfig[:default_admin_password]`

Set the default admin password. The default password is "admin".

`AppConfig[:default_admin_password] = "admin"`

### Data directories

#### `AppConfig[:data_directory]`

If you run ArchivesSpace using the standard scripts (archivesspace.sh, archivesspace.bat or as a Windows service), the value of :data_directory is automatically set to be the "data" directory of your ArchivesSpace distribution.
You don't need to change this value unless you specifically want ArchivesSpace to put its data files elsewhere.

`AppConfig[:data_directory] = File.join(Dir.home, "ArchivesSpace")`

#### `AppConfig[:backup_directory]`

Directory to store automated backups when using the embedded demo database (Apache Derby instead of MySQL). This defaults to `demo_db_backups` within the `data` directory.

`AppConfig[:backup_directory] = proc { File.join(AppConfig[:data_directory], "demo_db_backups") }`

### Solr defaults

#### `AppConfig[:solr_indexing_frequency_seconds]`

The number of seconds between each run of the SUI and PUI indexers. The indexers will perform an indexing cycle every configured number of seconds.

`AppConfig[:solr_indexing_frequency_seconds] = 30`

#### `AppConfig[:solr_facet_limit]`

The maximum number of distinct facet terms Solr will include in the response for a given field.

`AppConfig[:solr_facet_limit] = 100`

#### `AppConfig[:default_page_size]`

The number of records included in each page in all paginated backend API responses.

`AppConfig[:default_page_size] = 10`

#### `AppConfig[:max_page_size]`

Requests to the backend API can define a custom page_size param. This is the maximum allowed page size.

`AppConfig[:max_page_size] = 250`

### Cookie prefix

#### `AppConfig[:cookie_prefix]`

A prefix added to cookies used by the application. Change this if you're running more than one instance of ArchivesSpace on the same hostname (i.e. multiple instances on different ports). Default is "archivesspace".

`AppConfig[:cookie_prefix] = "archivesspace"`

### SUI Indexer settings

The periodic indexer can run using multiple threads to take advantage of multiple CPU cores.
By setting these two options, you can control how many CPU cores are used, and the amount of memory that will be consumed by the indexing process (more cores and/or more records per thread means more memory used).

#### `AppConfig[:indexer_records_per_thread]`

The size of each batch of records passed to each indexer worker thread to process and push to Solr. More records per thread means that more memory will be used by the indexer process.

`AppConfig[:indexer_records_per_thread] = 25`

#### `AppConfig[:indexer_thread_count]`

The number of worker threads to be used by the SUI indexer. More worker threads means that more CPU cores will be used.

`AppConfig[:indexer_thread_count] = 4`

#### `AppConfig[:indexer_solr_timeout_seconds]`

The indexer makes requests to Solr in order to push updated records to the Solr index. This is the maximum number of seconds that the indexer will wait for Solr to respond to a request.

`AppConfig[:indexer_solr_timeout_seconds] = 300`

### PUI Indexer Settings

#### `AppConfig[:pui_indexer_enabled]`

If false, no PUI indexer is started. Set to false if not using the PUI at all.

`AppConfig[:pui_indexer_enabled] = true`

#### `AppConfig[:pui_indexing_frequency_seconds]`

The number of seconds between each run of the PUI indexer. The indexer will perform an indexing cycle every configured number of seconds.

`AppConfig[:pui_indexing_frequency_seconds] = 30`

#### `AppConfig[:pui_indexer_records_per_thread]`

The size of each batch of records passed to each indexer worker thread to process and push to Solr. The PUI indexer can run using multiple threads to take advantage of multiple CPU cores. By setting these two options, you can control how many CPU cores are used, and the amount of memory that will be consumed by the indexing process (more cores and/or more records per thread means more memory used).

`AppConfig[:pui_indexer_records_per_thread] = 25`

#### `AppConfig[:pui_indexer_thread_count]`

The number of worker threads to be used by the PUI indexer. More worker threads means that more CPU cores will be used.

`AppConfig[:pui_indexer_thread_count] = 1`

### Index state

#### `AppConfig[:index_state_class]`

The indexer needs a place to store its state (to keep track of which records have already been indexed). Set to 'IndexState' (default) to store the state in the local `data` directory. Set to 'IndexStateS3' (optional) to store the state in an AWS S3 bucket.

`AppConfig[:index_state_class] = 'IndexState'`

#### `AppConfig[:index_state_s3]` - Relevant only when using S3 storage for the indexer state

If storing the indexer state in Amazon S3 (optional), you need to configure the access to S3.

NOTE: S3 charges for read/update requests, and the PUI indexer is continually writing to state files, so you may want to increase `pui_indexing_frequency_seconds` and `solr_indexing_frequency_seconds`.

##### Configuring S3 access using environment variables (default)

By default, the S3 configuration is fetched from the following shell environment variables:

- `AWS_REGION`
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_ASPACE_BUCKET`

The `:cookie_prefix` configuration is used as a prefix for the state files stored in the bucket - useful when using the same bucket to store the indexer state of multiple ArchivesSpace instances.

##### Configuring S3 access using AppConfig variable in the `config.rb` file

```ruby
AppConfig[:index_state_s3] = {
  region: "us-east-1",
  aws_access_key_id: "ASIAXXXXEXAMPLEID",
  aws_secret_access_key: "xXxxXXxxXX/XXXXXX/XXXXXXXEXAMPLEKEY",
  bucket: "my-as-test-bucket",
  prefix: proc { "#{AppConfig[:cookie_prefix]}_" },
}
```

You can use a literal string such as `prefix: "some_prefix_"` instead of the proc shown above, which derives the prefix from the `:cookie_prefix` setting.
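
Settings throughout this file can take `proc { ... }` values, as in the `prefix` entry above; a proc is re-evaluated each time the setting is read, so one setting can derive its value from another. A minimal sketch of that behavior (the `DemoConfig` class below is illustrative only, not ArchivesSpace's actual `AppConfig` implementation):

```ruby
# Illustrative stand-in for a config store that supports proc values.
# Not ArchivesSpace code -- a sketch of how proc-valued settings behave.
class DemoConfig
  def initialize
    @settings = {}
  end

  def []=(key, value)
    @settings[key] = value
  end

  # Procs are called at read time, so they can reference other settings.
  def [](key)
    value = @settings.fetch(key)
    value.is_a?(Proc) ? value.call : value
  end
end

cfg = DemoConfig.new
cfg[:cookie_prefix] = "archivesspace"
cfg[:state_prefix]  = proc { "#{cfg[:cookie_prefix]}_" }

puts cfg[:state_prefix]   # prints "archivesspace_"

# Because the proc is evaluated lazily, later changes are picked up:
cfg[:cookie_prefix] = "second_instance"
puts cfg[:state_prefix]   # prints "second_instance_"
```

This is why the defaults in this document assign procs referencing settings like `AppConfig[:data_directory]` and `AppConfig[:cookie_prefix]`: the derived value stays consistent even if the referenced setting is changed later in the file.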

### Misc. database options

#### `AppConfig[:allow_other_unmapped]`

Allow assigning the special enumeration value `other_unmapped` for dynamic enum (controlled value) fields. When set to `true`, `other_unmapped` is treated as a valid value for all enumeration (controlled value) fields, and is added as a possible value for all controlled value lists. This feature is designed for handling unmapped or unknown enumeration values - particularly useful during data migrations where source data may have values not yet defined in controlled value lists, or generally when importing external data that uses values not already defined in a controlled value list.

`AppConfig[:allow_other_unmapped] = false`

#### `AppConfig[:db_url_redacted]`

This is how the database URL (which includes the database username and password) will appear in the logs. The default replaces the username and password with `REDACTED`, so that `"user=john&password=secret123"` becomes `"user=[REDACTED]&password=[REDACTED]"`.

`AppConfig[:db_url_redacted] = proc { AppConfig[:db_url].gsub(/(user|password)=(.*?)(&|$)/, '\1=[REDACTED]\3') }`

#### `AppConfig[:demo_db_backup_schedule]`

When using the embedded demo database (Apache Derby instead of MySQL), this is the schedule of the automated backups, in cron format. By default, it is at 4AM every day.

`AppConfig[:demo_db_backup_schedule] = "0 4 * * *"`

#### `AppConfig[:demo_db_backup_number_to_keep]`

How many backups to keep available when using the embedded demo database.

`AppConfig[:demo_db_backup_number_to_keep] = 7`

#### `AppConfig[:allow_unsupported_database]`

Set this to true if you are determined to use a database other than MySQL or the embedded demo database based on Apache Derby (not recommended!).

`AppConfig[:allow_unsupported_database] = false`

#### `AppConfig[:allow_non_utf8_mysql_database]`

Set this to true to skip the standard validation that the character encoding of MySQL tables is set to UTF8 (not recommended!).

`AppConfig[:allow_non_utf8_mysql_database] = false`

### Proxy URLs

If you are serving user-facing applications via a proxy (i.e., another domain or port, or via https, or under a prefix) it is recommended that you record those URLs in your configuration.

#### `AppConfig[:frontend_proxy_url]`

Proxy URL for the frontend (staff interface).

`AppConfig[:frontend_proxy_url] = proc { AppConfig[:frontend_url] }`

#### `AppConfig[:public_proxy_url]`

Proxy URL for the public interface.

`AppConfig[:public_proxy_url] = proc { AppConfig[:public_url] }`

#### `AppConfig[:oai_proxy_url]`

Proxy URL for the OAI service (if exposed, see the OAI section).

`AppConfig[:oai_proxy_url] = 'http://your-public-oai-url.example.com'`

#### `AppConfig[:frontend_proxy_prefix]`

Don't override this setting unless you know what you're doing.

`AppConfig[:frontend_proxy_prefix] = proc { "#{URI(AppConfig[:frontend_proxy_url]).path}/".gsub(%r{/+$}, "/") }`

#### `AppConfig[:public_proxy_prefix]`

Don't override this setting unless you know what you're doing.

`AppConfig[:public_proxy_prefix] = proc { "#{URI(AppConfig[:public_proxy_url]).path}/".gsub(%r{/+$}, "/") }`

### Enable component applications

Setting any of these to false will prevent the associated applications from starting. Temporarily disabling the frontend and public UIs and/or the indexer may help users who are running into memory-related issues during migration.

#### `AppConfig[:enable_backend]`

`AppConfig[:enable_backend] = true`

#### `AppConfig[:enable_frontend]`

`AppConfig[:enable_frontend] = true`

#### `AppConfig[:enable_public]`

`AppConfig[:enable_public] = true`

#### `AppConfig[:enable_solr]`

`AppConfig[:enable_solr] = true`

#### `AppConfig[:enable_indexer]`

`AppConfig[:enable_indexer] = true`

#### `AppConfig[:enable_docs]`

`AppConfig[:enable_docs] = true`

#### `AppConfig[:enable_oai]`

`AppConfig[:enable_oai] = true`

### Jetty shutdown

Some use cases call for the ability to shut down the Jetty service using Jetty's ShutdownHandler, which allows a POST request to a specific URI to signal server shutdown. The prefix for this URI path is set to `/xkcd` to reduce the possibility of a collision in the path configuration. So the full path would be:

`/xkcd/shutdown?token={randomly generated password}`

The launcher creates a password to use this, which is stored in the data directory. This is not turned on by default.

#### `AppConfig[:use_jetty_shutdown_handler]`

`AppConfig[:use_jetty_shutdown_handler] = false`

#### `AppConfig[:jetty_shutdown_path]`

`AppConfig[:jetty_shutdown_path] = "/xkcd"`

### Managing multiple backend instances

If you have multiple instances of the backend running behind a load balancer, list the URL of each backend instance here. This is used by the real-time indexing, which needs to connect directly to each running instance.

By default we assume you're not using a load balancer, so we just connect to the regular backend URL.

#### `AppConfig[:backend_instance_urls]`

`AppConfig[:backend_instance_urls] = proc { [AppConfig[:backend_url]] }`

### Theme

For theming customization, see https://docs.archivesspace.org/customization/theming/

#### `AppConfig[:frontend_theme]`

Name of the theme to use on the Staff UI.

`AppConfig[:frontend_theme] = "default"`

#### `AppConfig[:public_theme]`

Name of the theme to use on the Public UI.

`AppConfig[:public_theme] = "default"`

### Session expiration

#### `AppConfig[:session_expire_after_seconds]`

Sessions marked as expirable will time out after this number of seconds of inactivity.

`AppConfig[:session_expire_after_seconds] = 3600`

#### `AppConfig[:session_nonexpirable_force_expire_after_seconds]`

Sessions marked as non-expirable will eventually expire too, but after a longer period.

`AppConfig[:session_nonexpirable_force_expire_after_seconds] = 604800`

### System usernames

Hidden system users (not viewable in the Staff UI user management) are automatically created to be used by the indexer, the PUI and the Staff UI in order to access the backend API.

#### `AppConfig[:search_username]`

The user name of the hidden system user that the indexer uses to access the backend API.

`AppConfig[:search_username] = "search_indexer"`

#### `AppConfig[:public_username]`

The user name of the hidden system user that the PUI uses to access the backend API.

`AppConfig[:public_username] = "public_anonymous"`

#### `AppConfig[:staff_username]`

The user name of the hidden system user that the Staff UI uses to access the backend API.

`AppConfig[:staff_username] = "staff_system"`

### Authentication sources

ArchivesSpace comes with its own user management functionality but can also be configured to authenticate against one or more [LDAP directories](/customization/ldap/).
OAuth authentication is available using the [aspace-oauth plugin](https://github.com/lyrasis/aspace-oauth).

`AppConfig[:authentication_sources] = []`

### Misc. backlog and snapshot settings

#### `AppConfig[:realtime_index_backlog_ms]`

> TODO - Needs more documentation

`AppConfig[:realtime_index_backlog_ms] = 60000`

### Notifications configuration

An internal notification mechanism is used to keep user preferences, enumeration (controlled value list) values, repository information etc. up to date within the UI while minimizing requests to the backend API.

#### `AppConfig[:notifications_backlog_ms]`

Notifications older than this number of milliseconds are considered expired and will no longer be announced.

`AppConfig[:notifications_backlog_ms] = 60000`

#### `AppConfig[:notifications_poll_frequency_ms]`

How often notifications should be announced, in milliseconds.

`AppConfig[:notifications_poll_frequency_ms] = 1000`

#### `AppConfig[:max_usernames_per_source]`

> TODO - Needs more documentation

`AppConfig[:max_usernames_per_source] = 50`

#### `AppConfig[:demodb_snapshot_flag]`

> TODO - Needs more documentation

`AppConfig[:demodb_snapshot_flag] = proc { File.join(AppConfig[:data_directory], "create_demodb_snapshot.txt") }`

### Report Configuration

#### `AppConfig[:report_page_layout]`

Uses valid values for the CSS3 @page directive's size property: http://www.w3.org/TR/css3-page/#page-size-prop

`AppConfig[:report_page_layout] = "letter"`

#### `AppConfig[:report_pdf_font_paths]`

> TODO - Needs more documentation

`AppConfig[:report_pdf_font_paths] = proc { ["#{AppConfig[:backend_url]}/reports/static/fonts/dejavu/DejaVuSans.ttf"] }`

#### `AppConfig[:report_pdf_font_family]`

> TODO - Needs more documentation

`AppConfig[:report_pdf_font_family] = "\"DejaVu Sans\", sans-serif"`

### Plugins directory

#### `AppConfig[:plugins_directory]`

By default, the plugins directory will be in your ASpace Home.
If you want to override that, update this with an absolute path.

`AppConfig[:plugins_directory] = "plugins"`

### Feedback

#### `AppConfig[:feedback_url]`

URL to direct the feedback link to. You can remove this from the footer by making the value blank.

`AppConfig[:feedback_url] = "http://archivesspace.org/contact"`

### User registration

#### `AppConfig[:allow_user_registration]`

Allow an unauthenticated user to create an account.

`AppConfig[:allow_user_registration] = true`

### Help Configuration

#### `AppConfig[:help_enabled]`

> TODO - Needs more documentation

`AppConfig[:help_enabled] = true`

#### `AppConfig[:help_url]`

> TODO - Needs more documentation

`AppConfig[:help_url] = "https://archivesspace.atlassian.net/wiki/spaces/ArchivesSpaceUserManual/overview"`

#### `AppConfig[:help_topic_base_url]`

> TODO - Needs more documentation

`AppConfig[:help_topic_base_url] = "https://archivesspace.atlassian.net/wiki/spaces/ArchivesSpaceUserManual/pages/"`

### Shared storage

#### `AppConfig[:shared_storage]`

`AppConfig[:shared_storage] = proc { File.join(AppConfig[:data_directory], "shared") }`

### Background jobs

#### `AppConfig[:job_file_path]`

Formerly known as :import_job_path.

> TODO - Needs more documentation

`AppConfig[:job_file_path] = proc { AppConfig.has_key?(:import_job_path) ? AppConfig[:import_job_path] : File.join(AppConfig[:shared_storage], "job_files") }`

#### `AppConfig[:job_poll_seconds]`

> TODO - Needs more documentation

`AppConfig[:job_poll_seconds] = proc { AppConfig.has_key?(:import_poll_seconds) ? AppConfig[:import_poll_seconds] : 5 }`

#### `AppConfig[:job_timeout_seconds]`

> TODO - Needs more documentation

`AppConfig[:job_timeout_seconds] = proc { AppConfig.has_key?(:import_timeout_seconds) ? AppConfig[:import_timeout_seconds] : 300 }`

#### `AppConfig[:jobs_cancelable]`

By default, only allow jobs to be cancelled if we're running against MySQL (since we can roll back).

`AppConfig[:jobs_cancelable] = proc { (AppConfig[:db_url] != AppConfig.demo_db_url).to_s }`

### Locations

#### `AppConfig[:max_location_range]`

> TODO - Needs more documentation

`AppConfig[:max_location_range] = 1000`

### Schema Info check

#### `AppConfig[:ignore_schema_info_check]`

The ArchivesSpace backend will not start if the database's schema_info version is not set correctly for this version of ArchivesSpace. This is to ensure that all the migrations have run and completed before starting the app. You can override this check here. Do so at your own peril.

`AppConfig[:ignore_schema_info_check] = false`

### Demo data

#### `AppConfig[:demo_data_url]`

This is a URL that points to some demo data that can be used for testing, teaching, etc. To use this, set an OS environment variable of ASPACE_DEMO = true.

`AppConfig[:demo_data_url] = "https://s3-us-west-2.amazonaws.com/archivesspacedemo/latest-demo-data.zip"`

### External IDs

#### `AppConfig[:show_external_ids]`

Expose external IDs in the frontend.

`AppConfig[:show_external_ids] = false`

### Jetty request/response buffer

Set the allowed size of the request/response header that Jetty will accept (anything bigger gets a 403 error). Note that if you want to increase this size, you will also have to configure your Nginx/Apache accordingly if you're using one of those.

#### `AppConfig[:jetty_response_buffer_size_bytes]`

`AppConfig[:jetty_response_buffer_size_bytes] = 64 * 1024`

#### `AppConfig[:jetty_request_buffer_size_bytes]`

`AppConfig[:jetty_request_buffer_size_bytes] = 64 * 1024`

### Container management configuration fields

#### `AppConfig[:container_management_barcode_length]`

Defines global and repo-level barcode validations (validating on length only).
Barcodes that have either no value, or a value between :min and :max, will validate on save. Set global constraints via :system_default, and use the repo_code value for repository-level constraints. Note that :system_default will always inherit down its values when possible.

`AppConfig[:container_management_barcode_length] = {:system_default => {:min => 5, :max => 10}, 'repo' => {:min => 9, :max => 12}, 'other_repo' => {:min => 9, :max => 9} }`

#### `AppConfig[:container_management_extent_calculator]`

Globally defines the behavior of the extent calculator. Use :report_volume (true/false) to define whether space should be reported in cubic or linear dimensions. Use :unit (:feet, :inches, :meters, :centimeters) to define the unit in which the calculator reports extents. Use :decimal_places to define how many decimal places the calculator should return.

Example:

`AppConfig[:container_management_extent_calculator] = { :report_volume => true, :unit => :feet, :decimal_places => 3 }`

### Record inheritance in public interface

#### `AppConfig[:record_inheritance]`

Define the fields for a record type that are inherited from ancestors if they don't have a value in the record itself. This is used in common/record_inheritance.rb and was developed to support the new public UI application.

Note - any changes to the record_inheritance config will require a reindex of PUI records to take effect.
To do this remove files from indexer_pui_state + +```ruby +AppConfig[:record_inheritance] = { + :archival_object => { + :inherited_fields => [ + { + :property => 'title', + :inherit_directly => true + }, + { + :property => 'component_id', + :inherit_directly => false + }, + { + :property => 'language', + :inherit_directly => true + }, + { + :property => 'dates', + :inherit_directly => true + }, + { + :property => 'extents', + :inherit_directly => false + }, + { + :property => 'linked_agents', + :inherit_if => proc {|json| json.select {|j| j['role'] == 'creator'} }, + :inherit_directly => false + }, + { + :property => 'notes', + :inherit_if => proc {|json| json.select {|j| j['type'] == 'accessrestrict'} }, + :inherit_directly => true + }, + { + :property => 'notes', + :inherit_if => proc {|json| json.select {|j| j['type'] == 'scopecontent'} }, + :inherit_directly => false + }, + { + :property => 'notes', + :inherit_if => proc {|json| json.select {|j| j['type'] == 'langmaterial'} }, + :inherit_directly => false + }, + ] + } +} +``` + +To enable composite identifiers - added to the merged record in a property +`\_composite_identifier` + +The values for `:include_level` and `:identifier_delimiter` shown here are the defaults + +If `:include_level` is set to true then level values (eg Series) will be included in `\_composite_identifier` + +The `:identifier_delimiter` is used when joining the four part identifier for resources + +```ruby +AppConfig[:record_inheritance][:archival_object][:composite_identifiers] = { + :include_level => false, + :identifier_delimiter => ' ' +} +``` + +To configure additional elements to be inherited use this pattern in your config + +```ruby +AppConfig[:record_inheritance][:archival_object][:inherited_fields] << + { + :property => 'linked_agents', + :inherit_if => proc {|json| json.select {|j| j['role'] == 'subject'} }, + :inherit_directly => true + } +``` + +... 
or use this pattern to add many new elements at once + +```ruby +AppConfig[:record_inheritance][:archival_object][:inherited_fields].concat( + [ + { + :property => 'subjects', + :inherit_if => proc {|json| + json.select {|j| + ! j['_resolved']['terms'].select { |t| t['term_type'] == 'topical'}.empty? } + }, + :inherit_directly => true + }, + { + :property => 'external_documents', + :inherit_directly => false + }, + { + :property => 'rights_statements', + :inherit_directly => false + }, + { + :property => 'instances', + :inherit_directly => false + }, + ]) +``` + +If you want to modify any of the default rules, the safest approach is to uncomment +the entire default record_inheritance config and make your changes. +For example, to stop scopecontent notes from being inherited into file or item records +uncomment the entire record_inheritance default config above, and add a skip_if +clause to the scopecontent rule, like this: + +```ruby + { + :property => 'notes', + :skip_if => proc {|json| ['file', 'item'].include?(json['level']) }, + :inherit_if => proc {|json| json.select {|j| j['type'] == 'scopecontent'} }, + :inherit_directly => false + }, +``` + +### PUI Configurations + +#### `AppConfig[:pui_search_results_page_size]` + +`AppConfig[:pui_search_results_page_size] = 10` + +#### `AppConfig[:pui_branding_img]` + +`AppConfig[:pui_branding_img] = 'archivesspace.small.png'` + +#### `AppConfig[:pui_block_referrer]` + +`AppConfig[:pui_block_referrer] = true # patron privacy; blocks full 'referer' when going outside the domain` + +#### `AppConfig[:pui_max_concurrent_pdfs]` + +The number of PDFs we'll generate (in the background) at the same time. + +PDF generation can be a little memory intensive for large collections, so we +set this fairly low out of the box. 

`AppConfig[:pui_max_concurrent_pdfs] = 2`

#### `AppConfig[:pui_pdf_timeout]`

You can set this to `nil` or zero to prevent a timeout.

`AppConfig[:pui_pdf_timeout] = 600`

#### `AppConfig[:pui_hide]`

`AppConfig[:pui_hide] = {}`

The following determine which 'tabs' are on the main horizontal menu:

```ruby
AppConfig[:pui_hide][:repositories] = false
AppConfig[:pui_hide][:resources] = false
AppConfig[:pui_hide][:digital_objects] = false
AppConfig[:pui_hide][:accessions] = false
AppConfig[:pui_hide][:subjects] = false
AppConfig[:pui_hide][:agents] = false
AppConfig[:pui_hide][:classifications] = false
AppConfig[:pui_hide][:search_tab] = false
```

The following determine globally whether the various "badges" appear on the Repository page. They can be overridden at the repository level below (e.g. `AppConfig[:repos][{repo_code}][:hide][:counts] = true`):

```ruby
AppConfig[:pui_hide][:resource_badge] = false
AppConfig[:pui_hide][:record_badge] = true # hide by default
AppConfig[:pui_hide][:digital_object_badge] = false
AppConfig[:pui_hide][:accession_badge] = false
AppConfig[:pui_hide][:subject_badge] = false
AppConfig[:pui_hide][:agent_badge] = false
AppConfig[:pui_hide][:classification_badge] = false
AppConfig[:pui_hide][:counts] = false
```

The following determines globally whether the 'container inventory' navigation tab/pill is hidden on the resource/collection page:

```ruby
AppConfig[:pui_hide][:container_inventory] = false
```

#### `AppConfig[:pui_requests_permitted_for_types]`

Determines the record types for which the request button is displayed.

`AppConfig[:pui_requests_permitted_for_types] = [:resource, :archival_object, :accession, :digital_object, :digital_object_component]`

#### `AppConfig[:pui_requests_permitted_for_containers_only]`

Set to `true` to disable requests for records that have no top container.

`AppConfig[:pui_requests_permitted_for_containers_only] = false`

#### `AppConfig[:pui_repos]`

Repository-specific examples.
Replace {repo_code} with your repository code, e.g. 'foo'; note that it must be lower case.

`AppConfig[:pui_repos] = {}`

Examples:

For a particular repository, only enable requests for certain record types (note that this configuration overrides `AppConfig[:pui_requests_permitted_for_types]` for the repository):

```ruby
AppConfig[:pui_repos]['foo'][:requests_permitted_for_types] = [:resource, :archival_object, :accession, :digital_object, :digital_object_component]
```

For a particular repository, disable requests when there is no top container:

```ruby
AppConfig[:pui_repos]['foo'][:requests_permitted_for_containers_only] = true
```

Set the email address to which repository requests are sent:

```ruby
AppConfig[:pui_repos]['foo'][:request_email] = {email address}
```

> TODO - Needs more documentation here

```ruby
AppConfig[:pui_repos]['foo'][:hide] = {}
AppConfig[:pui_repos]['foo'][:hide][:counts] = true
```

#### `AppConfig[:pui_display_deaccessions]`

> TODO - Needs more documentation

`AppConfig[:pui_display_deaccessions] = true`

#### `AppConfig[:pui_page_actions_cite]`

Enable / disable the PUI resource/archival object page 'cite' action.

`AppConfig[:pui_page_actions_cite] = true`

#### `AppConfig[:pui_page_actions_bookmark]`

Enable / disable the PUI resource/archival object page 'bookmark' action.

`AppConfig[:pui_page_actions_bookmark] = true`

#### `AppConfig[:pui_page_actions_request]`

Enable / disable the PUI resource/archival object page 'request' action.

`AppConfig[:pui_page_actions_request] = true`

#### `AppConfig[:pui_page_actions_print]`

Enable / disable the PUI resource/archival object page 'print' action.

`AppConfig[:pui_page_actions_print] = true`

#### `AppConfig[:pui_enable_staff_link]`

When a user is authenticated, add a link back to the staff interface from the record being viewed.

`AppConfig[:pui_enable_staff_link] = true`

#### `AppConfig[:pui_staff_link_mode]`

By default, the staff link will open the record in the staff interface in edit mode,
change this to 'readonly' for it to open in read-only mode.

`AppConfig[:pui_staff_link_mode] = 'edit'`

#### `AppConfig[:pui_page_custom_actions]`

Add page actions via the configuration.

`AppConfig[:pui_page_custom_actions] = []`

JavaScript action example:

```ruby
AppConfig[:pui_page_custom_actions] << {
  'record_type' => ['resource', 'archival_object'], # the jsonmodel type to show for
  'label' => 'actions.do_something', # the I18n path for the action button
  'icon' => 'fa-paw', # the font-awesome icon CSS class
  'onclick_javascript' => 'alert("do something grand");',
}
```

Hyperlink action example:

```ruby
AppConfig[:pui_page_custom_actions] << {
  'record_type' => ['resource', 'archival_object'], # the jsonmodel type to show for
  'label' => 'actions.do_something', # the I18n path for the action button
  'icon' => 'fa-paw', # the font-awesome icon CSS class
  'url_proc' => proc {|record| 'http://example.com/aspace?uri='+record.uri},
}
```

Form-POST action example:

```ruby
AppConfig[:pui_page_custom_actions] << {
  'record_type' => ['resource', 'archival_object'], # the jsonmodel type to show for
  'label' => 'actions.do_something', # the I18n path for the action button
  'icon' => 'fa-paw', # the font-awesome icon CSS class
  # 'post_params_proc' returns a hash of params which populates a form with hidden inputs ('name' => 'value')
  'post_params_proc' => proc {|record| {'uri' => record.uri, 'display_string' => record.display_string} },
  # 'url_proc' returns the URL for the form to POST to
  'url_proc' => proc {|record| 'http://example.com/aspace?uri='+record.uri},
  # 'form_id' as string to be used as the form's ID
  'form_id' => 'my_grand_action',
}
```

ERB action example:

```ruby
AppConfig[:pui_page_custom_actions] << {
  'record_type' => ['resource', 'archival_object'], # the jsonmodel type to show for
  # 'erb_partial' returns the path to an erb template from which the action will be rendered
  'erb_partial'
=> 'shared/my_special_action',
}
```

#### `AppConfig[:pui_email_enabled]`

PUI email settings (emails are logged when delivery is disabled).

`AppConfig[:pui_email_enabled] = false`

#### `AppConfig[:pui_email_override]`

See `AppConfig[:pui_repos][{repo_code}][:request_email]` above for setting repository email overrides. `pui_email_override` is intended for testing: when set, this email will be the to-address for all sent emails.

`AppConfig[:pui_email_override] = 'testing@example.com'`

#### `AppConfig[:pui_request_email_fallback_to_address]`

The 'to' email address for repositories that don't define their own email.

`AppConfig[:pui_request_email_fallback_to_address] = 'testing@example.com'`

#### `AppConfig[:pui_request_email_fallback_from_address]`

The 'from' email address for repositories that don't define their own email.

`AppConfig[:pui_request_email_fallback_from_address] = 'testing@example.com'`

#### `AppConfig[:pui_request_use_repo_email]`

Use the repository record email address for requests (overrides the config email).

`AppConfig[:pui_request_use_repo_email] = false`

#### `AppConfig[:pui_email_delivery_method]`

`AppConfig[:pui_email_delivery_method] = :sendmail`

#### `AppConfig[:pui_email_sendmail_settings]`

```ruby
AppConfig[:pui_email_sendmail_settings] = {
  location: '/usr/sbin/sendmail',
  arguments: '-i'
}
```

#### `AppConfig[:pui_email_smtp_settings]`

Applies when `AppConfig[:pui_email_delivery_method]` is set to `:smtp`.

Example SMTP configuration:

```ruby
AppConfig[:pui_email_smtp_settings] = {
  address: 'smtp.gmail.com',
  port: 587,
  domain: 'gmail.com',
  user_name: '<username>',
  password: '<password>',
  authentication: 'plain',
  enable_starttls_auto: true,
}
```

#### `AppConfig[:pui_email_perform_deliveries]`

`AppConfig[:pui_email_perform_deliveries] = true`

#### `AppConfig[:pui_email_raise_delivery_errors]`

`AppConfig[:pui_email_raise_delivery_errors] = true`

#### `AppConfig[:pui_readmore_max_characters]`

The number of characters of note text to display before truncating and showing the 'Read More' link.

`AppConfig[:pui_readmore_max_characters] = 450`

#### `AppConfig[:pui_expand_all]`

Whether to expand all additional information blocks at the bottom of record pages by default. `true` expands all blocks, `false` collapses all blocks.

`AppConfig[:pui_expand_all] = false`

#### `AppConfig[:max_search_columns]`

Use to specify the maximum number of columns to display when searching or browsing.

`AppConfig[:max_search_columns] = 7`
diff --git a/src/content/docs/fr/customization/index.md b/src/content/docs/fr/customization/index.md
new file mode 100644
index 0000000..fd97d72
--- /dev/null
+++ b/src/content/docs/fr/customization/index.md
@@ -0,0 +1,13 @@
---
title: Customization and configuration
description: Index of the pages within the Customization section of the website.
---

- [Configuring ArchivesSpace](./configuration)
- [Configuring LDAP authentication](./ldap)
- [Adding support for additional username/password-based authentication backends](./authentication)
- [Customizing text in ArchivesSpace](./locales)
- [ArchivesSpace Plug-ins](./plugins)
- [Theming ArchivesSpace](./theming)
- [Managing frontend assets with Bower](./bower)
- [Adding custom reports](./reports)
diff --git a/src/content/docs/fr/customization/ldap.md b/src/content/docs/fr/customization/ldap.md
new file mode 100644
index 0000000..ca4ac29
--- /dev/null
+++ b/src/content/docs/fr/customization/ldap.md
@@ -0,0 +1,70 @@
---
title: LDAP authentication
description: Instructions on how to manage and authenticate against one or more LDAP directories.
---

ArchivesSpace can manage its own user directory, but can also be configured to authenticate against one or more LDAP directories by specifying them in the application's configuration file. When a user attempts to log in, each authentication source is tried until one matches.
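Because each entry in `AppConfig[:authentication_sources]` is tried in order until one matches, more than one directory can be listed. The sketch below is illustrative only — the hostnames and base DNs are hypothetical, and the individual `LDAPAuth` options are the same ones explained in the examples on this page:

```ruby
# Illustrative sketch: two hypothetical LDAP directories, tried in order.
AppConfig[:authentication_sources] = [
  {
    # Tried first, e.g. a staff directory (hypothetical host)
    :model => 'LDAPAuth',
    :hostname => 'ldap-staff.example.com',
    :port => 389,
    :base_dn => 'ou=staff,dc=example,dc=com',
    :username_attribute => 'uid',
    :attribute_map => {:cn => :name},
  },
  {
    # Only consulted if the first source does not match
    :model => 'LDAPAuth',
    :hostname => 'ldap-affiliates.example.com',
    :port => 389,
    :base_dn => 'ou=affiliates,dc=example,dc=com',
    :username_attribute => 'uid',
    :attribute_map => {:cn => :name},
  },
]
```

A user who fails to match in the first directory is simply looked up in the next; the login fails only once every source has been tried.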

Here is a minimal example of an LDAP configuration:

```ruby
AppConfig[:authentication_sources] = [{
  :model => 'LDAPAuth',
  :hostname => 'ldap.example.com',
  :port => 389,
  :base_dn => 'ou=people,dc=example,dc=com',
  :username_attribute => 'uid',
  :attribute_map => {:cn => :name},
}]
```

With this configuration, ArchivesSpace performs authentication by connecting to `ldap://ldap.example.com:389/`, binding anonymously, and searching the `ou=people,dc=example,dc=com` tree for `uid = <username>`.

If the user is found, ArchivesSpace authenticates them by binding with the password they supplied. Finally, the `:attribute_map` entry specifies how LDAP attributes should be mapped to ArchivesSpace user attributes (mapping LDAP's `cn` to ArchivesSpace's `name` in the above example).

Many LDAP directories don't support anonymous binding. To integrate with such a directory, you will need to specify the username and password of a user with permission to connect to the directory and search for other users. Modifying the previous example for this case looks like this:

```ruby
AppConfig[:authentication_sources] = [{
  :model => 'LDAPAuth',
  :hostname => 'ldap.example.com',
  :port => 389,
  :base_dn => 'ou=people,dc=example,dc=com',
  :username_attribute => 'uid',
  :attribute_map => {:cn => :name},
  :bind_dn => 'uid=archivesspace_auth,ou=system,dc=example,dc=com',
  :bind_password => 'secretsquirrel',
}]
```

Finally, some LDAP directories enforce the use of SSL encryption.
To configure ArchivesSpace to connect via LDAPS, change the port as appropriate and specify the `encryption` option:

```ruby
AppConfig[:authentication_sources] = [{
  :model => 'LDAPAuth',
  :hostname => 'ldap.example.com',
  :port => 636,
  :base_dn => 'ou=people,dc=example,dc=com',
  :username_attribute => 'uid',
  :attribute_map => {:cn => :name},
  :bind_dn => 'uid=archivesspace_auth,ou=system,dc=example,dc=com',
  :bind_password => 'secretsquirrel',
  :encryption => :simple_tls,
}]
```
diff --git a/src/content/docs/fr/customization/locales.md b/src/content/docs/fr/customization/locales.md
new file mode 100644
index 0000000..f408128
--- /dev/null
+++ b/src/content/docs/fr/customization/locales.md
@@ -0,0 +1,78 @@
---
title: Customizing text
description: Instructions for customizing text in ArchivesSpace using locale files, including how to override labels, messages, tooltips, and placeholders via the Rails I18n API.
---

ArchivesSpace has abstracted all the labels, messages and tooltips out of the application into locale files, which are part of the [Rails Internationalization (I18n)](http://guides.rubyonrails.org/i18n.html) API. The locales in this directory represent the basis of translations for use by all ArchivesSpace applications. Each application may then add to or override these values with its own locale files.

For a guide on managing these "i18n" files, please visit http://guides.rubyonrails.org/i18n.html

You can see the source files for both the [Staff Frontend Application](https://github.com/archivesspace/archivesspace/tree/master/frontend/config/locales) and [Public Application](https://github.com/archivesspace/archivesspace/tree/master/public/config/locales). There is also a [common locale file](https://github.com/archivesspace/archivesspace/blob/master/common/locales/en.yml) for some values used throughout the ArchivesSpace applications.
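The add-or-override layering behaves like a deep merge of nested locale hashes, with later files winning for any key they define. The following is a toy illustration of that behaviour in plain Ruby — it is not the actual Rails I18n machinery, and the `brand` keys are just sample values:

```ruby
# Illustrative only: a tiny stand-in for how locale overrides layer.
# Real lookups go through the Rails I18n API; this just demonstrates
# the "later file wins, untouched keys fall through" merge behaviour.
require 'yaml'

base = YAML.safe_load(<<~YML)
  en:
    brand:
      title: ArchivesSpace
      welcome_message: Welcome!
YML

override = YAML.safe_load(<<~YML)
  en:
    brand:
      welcome_message: HEY HEY HEY!!
YML

# Recursively merge, preferring values from the override hash
deep_merge = lambda do |a, b|
  a.merge(b) do |_key, old, new|
    old.is_a?(Hash) && new.is_a?(Hash) ? deep_merge.call(old, new) : new
  end
end

locales = deep_merge.call(base, override)

# Equivalent of I18n.t('brand.welcome_message') and I18n.t('brand.title')
puts locales.dig('en', 'brand', 'welcome_message') # => HEY HEY HEY!!
puts locales.dig('en', 'brand', 'title')           # => ArchivesSpace
```

In the real application the same effect is achieved simply by dropping an override yml file into the appropriate `locales` directory, as described below.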

The base translations are broken up as follows:

- The topmost file, "en.yml", contains the translations for all the record labels, messages and tooltips in English
- "enums/en.yml" contains the entries for the dynamic enumeration codes - add your translations to this file after importing your enumeration codes

These values are pulled into the views using the `I18n.t()` method, like `I18n.t("brand.welcome_message")`.

If the value you want to override is in the common locale file (like the "digital object title" field label, for example), you can change it by simply editing the locales/en.yml file in your ArchivesSpace distribution home directory. A restart is required for the changes to take effect.

If the value you want to change is in either the public or staff specific en.yml files, you can override these values using the plugins directory. For example, if you want to change the welcome message on the public frontend, make a file in your ArchivesSpace distribution called 'plugins/local/public/locales/en.yml' and put in the following values:

```yaml
en:
  brand:
    title: My Archive
    home: Home
    welcome_message: HEY HEY HEY!!
```

If you restart ArchivesSpace, these values will take effect.

If you are adding a new value, you will also need to add the value into the Staff Frontend Application by clicking on the System dropdown menu and choosing Manage Controlled Value Lists. Select the list and add the value. If you restart ArchivesSpace, the translation value that you set in the yml file should appear.

If you're using a different language, simply swap out the en.yml for something else (like fr.yml) and update the locale setting in the config.rb file (i.e. `AppConfig[:locale] = :fr`).

## Tooltips

To add a tooltip to a record label, simply add a new entry with `_tooltip` appended to the label's code.
For example, to add a tooltip for the Accession's Title field:

```yaml
en:
  accession:
    title: Title
    title_tooltip: |
      <p>The title assigned to an accession or resource. The accession title
      need not be the same as the resource title. Moreover, a title need not
      be expressed for the accession record, as it can be implicitly
      inherited from the resource record to which the accession is
      linked.</p>
```

## Placeholders

For text fields or text areas, you may like to have some placeholder text displayed when the field is empty (for more details see http://www.w3.org/html/wg/drafts/html/master/forms.html#the-placeholder-attribute). Please note that while most modern browser releases support this feature, older versions will not.

To add a placeholder to a record's text field, add a new entry with `_placeholder` appended to the label's code. For example:

```yaml
en:
  accession:
    title: Title
    title_placeholder: See DACS 2.3.18-2.3.22
```
diff --git a/src/content/docs/fr/customization/plugins.md b/src/content/docs/fr/customization/plugins.md
new file mode 100644
index 0000000..c9c4f95
--- /dev/null
+++ b/src/content/docs/fr/customization/plugins.md
@@ -0,0 +1,343 @@
---
title: Plugins
description: An overview of how to develop, structure, enable, and configure plugins in ArchivesSpace to customize application behavior, interface, branding, and search functionality without altering core code.
---

Plugins are a powerful feature, designed to allow you to change most aspects of how the application behaves.

Plugins provide a mechanism to customize ArchivesSpace by overriding or extending functions without changing the core codebase. As they are self-contained, they also permit the ready sharing of packages of customization between ArchivesSpace instances.

The ArchivesSpace distribution comes with the `hello_world` exemplar plugin.
Please refer to its [README file](https://github.com/archivesspace/archivesspace/blob/master/plugins/hello_world/README.md) for a detailed description of how it is constructed and implemented. + +You can find other examples in the following plugin repositories. The ArchivesSpace plugins that are officially supported and maintained by the ArchivesSpace Program Team are in archivesspace-plugins (https://github.com/archivesspace-plugins). Deprecated code which is no longer supported but has been kept for future reference is in archivesspace-deprecated (https://github.com/archivesspace-deprecated). There is an open/unmanaged GitHub repository where community members can share their code called archivesspace-labs (https://github.com/archivesspace-labs). The community developed Python library for interacting with the ArchivesSpace API, called ArchivesSnake, is managed in the archivesspace-labs repository. + +## Enabling plugins + +Plugins are enabled by placing them in the `plugins` directory, and referencing them in the +ArchivesSpace configuration, `config/config.rb`. For example: + +```ruby +AppConfig[:plugins] = ['local', 'hello_world', 'my_plugin'] +``` + +This configuration assumes the following directories exist: + + plugins + hello_world + local + my_plugin + +Note that the order that the plugins are listed in the `:plugins` configuration option +determines the order in which they are loaded by the application. + +## Plugin structure + +The directory structure within a plugin is similar to the structure of the core application. +The following shows the supported plugin structure. Files contained in these directories can +be used to override or extend the behavior of the core application. + + backend + controllers ......... backend endpoints + model ............... database mapping models + converters .......... classes for importing data + job_runners ......... classes for defining background jobs + plugin_init.rb ...... 
if present, loaded when the backend first starts
    lib/bulk_import ..... bulk import processor
  frontend
    assets .............. static assets (such as images, javascript) in the staff interface
    controllers ......... controllers for the staff interface
    locales ............. locale translations for the staff interface
    views ............... templates for the staff interface
    plugin_init.rb ...... if present, loaded when the staff interface first starts
  public
    assets .............. static assets (such as images, javascript) in the public interface
    controllers ......... controllers for the public interface
    locales ............. locale translations for the public interface
    views ............... templates for the public interface
    plugin_init.rb ...... if present, loaded when the public interface first starts
  migrations ............ database migrations
  schemas ............... JSONModel schema definitions
  search_definitions.rb . Advanced search fields

**Note** that `backend/lib/bulk_import` is the only directory in `backend/lib/` that is loaded by the plugin manager. Other files in `backend/lib/` will not be loaded during startup.

**Note** that, in order to override or extend the behavior of core models and controllers, you cannot simply put your replacement with the same name in the corresponding directory path. Core models and controllers can be overridden by adding an `after_initialize` block to `plugin_init.rb` (e.g. [aspace-hvd-pui](https://github.com/harvard-library/aspace-hvd-pui/blob/master/public/plugin_init.rb#L43)).

## Overriding behavior

A general rule is: to override behavior, rather than extend it, match the path to the file that contains the behavior to be overridden.

It is not necessary for a plugin to have all of these directories.
For example, to override some part of a locale file for the staff interface, you can just add the following structure to the local plugin:

    plugins/local/frontend/locales/en.yml

More detailed information about overriding locale files is found in [Customizing text in ArchivesSpace](/customization/locales)

## Overriding the visual (web) presentation

You can directly override any view file in the core application by placing an erb file of the same name in the analogous path.
For example, if you want to override the appearance of the "Welcome" [home] page of the Public User Interface, you can make your changes to a file `show.html.erb` and place it at `plugins/my_fine_plugin/public/views/welcome/show.html.erb`. (Where _my_fine_plugin_ is the name of your plugin)

### Implementing a broadly-applied style or javascript change

Unless you want to write inline style or javascript (which may be practicable for a template or two), best practice is to create `plugins/my_fine_plugin/public/views/layout_head.html.erb` or `plugins/my_fine_plugin/frontend/views/layout_head.html.erb`, which contains the HTML statements to incorporate your javascript or css into the `<HEAD>` element of the template. Here's an example:

- For the public interface, I want to change the size of the text in all links when the user is hovering.
  - I create `plugins/my_fine_plugin/public/assets/my.css`:
    ```css
    a:hover {
      font-size: 2em;
    }
    ```
  - I create `plugins/my_fine_plugin/public/views/layout_head.html.erb`, and insert:
    ```ruby
    <%= stylesheet_link_tag "#{@base_url}/assets/my.css", media: :all %>
    ```
- For the public interface, I want to add some JavaScript behavior such that, when the user hovers over a list item, asterisks appear:
  - I create `plugins/my_fine_plugin/public/assets/my.js`:
    ```javascript
    $(function () {
      $('li').hover(
        function () {
          $(this).append($('<span> ***</span>'))
        },
        function () {
          $(this).find('span:last').remove()
        }
      )
    })
    ```
  - I add to `plugins/my_fine_plugin/public/views/layout_head.html.erb`:
    ```ruby
    <%= javascript_include_tag "#{@base_url}/assets/my.js" %>
    ```

## Adding your own branding

Another example: to override the branding of the staff interface, add your own template at:

    plugins/local/frontend/views/site/_branding.html.erb

Files such as images, stylesheets and PDFs can be made available as static resources by placing them in an `assets` directory under an enabled plugin. For example, the following file:

    plugins/local/frontend/assets/my_logo.png

Will be available via the following URL:

    http://your.frontend.domain.and:port/assets/my_logo.png

For example, to reference this logo from the custom branding file, use markup such as:

```erb
<div class="container branding">
  <img src="<%= AppConfig[:frontend_proxy_prefix] %>assets/my_logo.png" alt="My logo" />
</div>
```

## Customizing the favicon

A favicon is an icon associated with a web page that browsers and operating systems display (i.e. in a browser's address bar or tab, next to the web page name in a bookmark list, etc.).

### Default images

The ArchivesSpace favicons are stored in the top-level `public/` directory of the frontend and public applications.

1. `frontend/public/favicon-AS.png`
2.
`frontend/public/favicon-AS.svg`
3. `public/public/favicon-AS.png`
4. `public/public/favicon-AS.svg`

### Markup

Favicon markup is found in each application's favicon partial template:

1. `frontend/app/views/site/_favicon.html.erb`
2. `public/app/views/shared/_favicon.html.erb`

### Configuration

Favicons are shown by default via the configuration options in `config.rb` (or `common/config/config-defaults.rb` in development). Set the respective option to `false` to not show a favicon.

```ruby
# config.rb
AppConfig[:pui_show_favicon] = true # whether or not to show a favicon
AppConfig[:frontend_show_favicon] = true # whether or not to show a favicon
```

### Plugin examples

Replace the default favicon with your own via a plugin.

:::caution[Reserved favicon filenames]
Custom favicon files must be named something other than `favicon-AS.png` and `favicon-AS.svg` in order to override the default favicon.
:::

#### Frontend

The frontend plugin should have the following directory structure:

```
plugins/local/frontend/
├── assets
│   ├── favicon.png
│   └── favicon.svg
└── views
    └── site
        └── _favicon.html.erb
```

The frontend favicon template should look something like:

```erb
<!-- plugins/local/frontend/views/site/_favicon.html.erb -->
<link rel="icon" type="image/png" href="<%= AppConfig[:frontend_proxy_prefix] %>assets/favicon.png">
<link rel="icon" type="image/svg+xml" href="<%= AppConfig[:frontend_proxy_prefix] %>assets/favicon.svg">
```

#### Public

The public plugin should have the following directory structure:

```
plugins/local/public/
├── assets
│   ├── favicon.png
│   └── favicon.svg
└── views
    └── shared
        └── _favicon.html.erb
```

The public favicon template should look something like:

```erb
<!-- plugins/local/public/views/shared/_favicon.html.erb -->
<link rel="icon" type="image/png" href="<%= asset_path('favicon.png', skip_pipeline: true) %>">
<link rel="icon"
type="image/svg+xml" href="<%= asset_path('favicon.svg', skip_pipeline: true) %>">
```

## Plugin configuration

Plugins can optionally contain a configuration file at `plugins/[plugin-name]/config.yml`.
This configuration file supports the following options:

    system_menu_controller
      The name of a controller that will be accessible via a Plugins menu in the System toolbar
    repository_menu_controller
      The name of a controller that will be accessible via a Plugins menu in the Repository toolbar
    parents
      [record-type]
        name
        cardinality
      ...

`system_menu_controller` and `repository_menu_controller` specify the names of frontend controllers that will be accessible via the system and repository toolbars respectively. A `Plugins` dropdown will appear in the toolbars if any enabled plugins have declared these configuration options. The controller name follows the standard naming conventions. For example:

```yaml
repository_menu_controller: hello_world
```

This points to a controller file at `plugins/hello_world/frontend/controllers/hello_world_controller.rb` which implements a controller class called `HelloWorldController`. When the menu item is selected by the user, the `index` action is called on the controller.

Note that the URLs for plugin controllers are scoped under `plugins`, so the URL for the above example is:

    http://your.frontend.domain.and:port/plugins/hello_world

Also note that the translation for the plugin's name in the `Plugins` dropdown menu is specified in a locale file in the `frontend/locales` directory in the plugin.
For example, in the `hello_world` +example there is an English locale file at: + + plugins/hello_world/frontend/locales/en.yml + +The translation for the plugin name in the `Plugins` dropdown menus is specified by the key `label` +under the plugin, like this: + +```yaml +en: + plugins: + hello_world: + label: Hello World +``` + +Note that the example locale file contains other keys that specify translations for text displayed +as part of the plugin's user interface. Be sure to place your plugin's translations as shown, under +`plugins.[your_plugin_name]` in order to avoid accidentally overriding translations for other +interface elements. In the example above, the translation for the `label` key can be referenced +directly in an erb view file as follows: + +```ruby +<%= I18n.t("plugins.hello_world.label") %> +``` + +Each entry under `parents` specifies a record type that this plugin provides a new subrecord for. +`[record-type]` is the name of the existing record type, for example `accession`. `name` is the +name of the plugin in its role as a subrecord of this parent, for example `hello_worlds`. +`cardinality` specifies the cardinality of the plugin records. Currently supported values are +`zero-to-many` and `zero-to-one`. + +## Changing search behavior + +A plugin can add additional fields to the advanced search interface by +including a `search_definitions.rb` file at the top-level of the +plugin directory. This file can contain definitions such as the +following: + +```ruby +AdvancedSearch.define_field(:name => 'payment_fund_code', :type => :enum, :visibility => [:staff], :solr_field => 'payment_fund_code_u_utext') +AdvancedSearch.define_field(:name => 'payment_authorizers', :type => :text, :visibility => [:staff], :solr_field => 'payment_authorizers_u_utext') +``` + +Each field defined will appear in the advanced search interface as a +searchable field. 
The `:visibility` option controls whether the field is presented in the staff or public interface (or both), while the `:type` parameter determines what sort of search is being performed. Valid values are `:text`, `:boolean`, `:date` and `:enum`. Finally, the `:solr_field` parameter controls which field is used from the underlying index.

## Adding Custom Reports

Custom reports may be added to plugins by adding a new report model as a subclass of `AbstractReport` to `plugins/[plugin-name]/backend/model/`, and the translations for said model to `plugins/[plugin-name]/frontend/locales/[language].yml`. Look to existing reports in the reports subdirectory of the ArchivesSpace base directory for examples of how to structure a report model.

There are several limitations to adding reports to plugins, including that reports from plugins may only use the generic report template. ArchivesSpace only searches for report templates in the reports subdirectory of the ArchivesSpace base directory, not in plugin directories. If you would like to implement a custom report with a custom template, consider adding the report to `archivesspace/reports/` instead of `archivesspace/plugins/[plugin-name]/backend/model/`.

## Frontend Specific Hooks

To make adding new record fields and sections to record forms a little easier via your plugin, the ArchivesSpace frontend provides a series of hooks via the `frontend/config/initializers/plugin.rb` module. These are as follows:

- `Plugins.add_search_base_facets(*facets)` - add to the base facets list to include extra facets for all record searches and listing pages.

- `Plugins.add_search_facets(jsonmodel_type, *facets)` - add facets for a particular JSONModel type to be included in searches and listing pages for that record type.

- `Plugins.add_resolve_field(field_name)` - use this when you have added a new field/relationship and you need it to be resolved when the record is retrieved from the API.

- `Plugins.register_edit_role_for_type(jsonmodel_type, role)` - when you add a new top level JSONModel, register it and its edit role so the listing view can determine if the "Edit" button can be displayed to the user.

- `Plugins.register_note_types_handler(proc)` where proc handles parameters `jsonmodel_type, note_types, context` - allow a plugin to customize the note types shown for a particular JSONModel type. For example, you can filter out those that do not apply to your institution.

- `Plugins.register_plugin_section(section)` - allows you to define a template to be inserted as a section for a given JSONModel record. A section is a type of `Plugins::AbstractPluginSection` which defines the source `plugin`, section `name`, the `jsonmodel_types` for which the section should show and any `opts` required by the templates at the time of render. These new sections (readonly, edit and sidebar additions) are output as part of the `PluginHelper` render methods.

  `Plugins::AbstractPluginSection` can be subclassed to allow flexible inclusion of arbitrary HTML. There are two examples provided with ArchivesSpace:

  - `Plugins::PluginSubRecord` - uses the `shared/subrecord` partial to output a standard styled ArchivesSpace section. `opts` requires the jsonmodel field to be defined.
+ +## Further information + +**Be sure to test your plugin thoroughly as it may have unanticipated impacts on your +ArchivesSpace application.** diff --git a/src/content/docs/fr/customization/reports.md b/src/content/docs/fr/customization/reports.md new file mode 100644 index 0000000..343513a --- /dev/null +++ b/src/content/docs/fr/customization/reports.md @@ -0,0 +1,51 @@ +--- +title: Reports +description: Instructions for creating custom reports and subreports in ArchivesSpace, including required structure, SQL usage, translations, optional customization methods, and integration with the reporting framework. +--- + +Adding a report is intended to be a fairly simple process. The requirements for creating a report are outlined below. + +## Adding a Report + +### Required + +- Create a class for your report that is a subclass of AbstractReport. +- Call register_report. If your report has any parameters, specify them here. +- Implement query_string + - This should be a raw SQL string + - To prevent SQL injection, use db.literal for any user input i.e. use `"select * from table where column = #{db.literal(value)}" ` instead of `"select * from table where column = '#{value}'"` +- Provide translations for column headers and the title of your report + - They should be in yml files under _language_.reports._report name_ + - The translation for title should be whatever you want the name of the report to be. + - If the translation you want is already in _language_.reports.translation_defaults (found in the static folder) you do not need to specify it. + - Translations specific to the individual report are given priority over translation defaults. + +### Optional + +- Implement your own initializer if your report has any parameters. +- Implement fix_row in order to clean up data and add subreports. + - Each result will be passed to fix_row as a hash + - ReportUtils offers various class methods to simplify cleaning up data. 
- You can also add subreports here with something like `row[:subreport_name] = SubreportClassName.new(self, row[:id]).get_content` where row is the result as a hash which was a parameter to fix_row. See [Adding a Subreport](#adding-a-subreport) for more information on adding subreports. + - Sometimes you will want to delete something from the result that you needed in order to generate a subreport but do not want to show up in the final report (such as id). To do this use `row.delete(:id)`. +- Special implementation of query - The default implementation is simply `db.fetch(query_string)` but implementing it yourself may give you more flexibility. In the end, it needs to return a result set. +- There is a hash called info that controls what shows up in the header at the top of the report. Examples may include total record count, total extent, or any parameters that are provided by the user for your report. Add anything you want to show up in the report header to info. Repository name will be included automatically. Be sure to provide translations for the keys you add to info. +- after_tasks is run after fix_row executes on all the results. Implement this if you have anything that needs to get done before the report is rendered. +- Specify identifier_field if you want to add a heading to each individual record. For instance, identifier_field might be `:accession_number` for a report on accessions. +- Implement page_break to return false if you do not want a page break after each record in the PDF of the report. +- Implement special_translation if there is anything you want to translate in a special way (i.e. it can't be accomplished by the yml file). + +## Adding A Subreport + +### Required + +- Create a class for your subreport that is a subclass of AbstractSubreport. +- Create an initializer that takes in the parent report/subreport as well as any parameters you need to run the subreport (usually this is just an id from the result in the parent report/subreport).
Your initializer should call `super(parent_report)`. +- Implement query_string. This works the same way as it does for reports. +- Provide necessary translations. + +### Optional + +- Special implementation of query +- fix_row works just like in reports + - note that you can add nested subreports diff --git a/src/content/docs/fr/customization/theming.md b/src/content/docs/fr/customization/theming.md new file mode 100644 index 0000000..9e15c0a --- /dev/null +++ b/src/content/docs/fr/customization/theming.md @@ -0,0 +1,141 @@ +--- +title: Theming +description: A guide to customizing the look and feel of ArchivesSpace using plugins or full theme rebuilds, including instructions for changing logos, CSS, and layout elements in both the public and staff interfaces. +--- + +## Making small changes + +It's easiest to use a plugin for small changes to your site's theme. With a plugin, +we can override default views, controllers, models, etc. without having to do a +complete rebuild of the source code. Be sure to remove the `#` at the beginning of +any line in `config.rb` that you want to change; any line that starts with a `#` is ignored. + +Let's say we wanted to change the branding logo on the public +interface. That can be easily changed in your `config.rb` file: + +```ruby +AppConfig[:pui_branding_img] +``` + +That setting is used by the file found in `public/app/views/shared/_header.html.erb` to display your PUI side logo. You don't need to change that file, only the setting in your `config.rb` file. + +You can store the image in `plugins/local/public/assets/images/logo.png`. You'll most likely need to create one or more of the directories.
+ +Your `AppConfig[:pui_branding_img]` setting should look something like this: + +```ruby +AppConfig[:pui_branding_img] = '/assets/images/logo.png' +``` + +Alt text for the PUI branding image can and should also be supplied via: + +```ruby +AppConfig[:pui_branding_img_alt_text] = 'My alt text' +``` + +If you want your image on the PUI to link out to another location, you will need to make a change to the file `public/app/views/shared/_header.html.erb`. The line that creates the logo just needs an `a href` added. You should also alter `AppConfig[:pui_branding_img_alt_text]` to make it clear that the image also functions as a link (e.g. `AppConfig[:pui_branding_img_alt_text] = 'Back to Example College Home'`). That will end up looking something like this: + +```erb +<div class="col-sm-3 hidden-xs"><a href="https://example.com"><img class="logo" src="<%= asset_path(AppConfig[:pui_branding_img]) %>" alt="<%= AppConfig[:pui_branding_img_alt_text] %>" /></a></div> +``` + +The Staff Side logo will need a small plugin file and cannot be set in your `config.rb` file. This needs to be changed in the `plugins/local/frontend/views/site/_branding.html.erb` file. You'll most likely need to create one or more of the directories. Then create that `_branding.html.erb` file and paste in the following code: + +```erb +<div class="container-fluid navbar-branding"> + <%= image_tag "archivesspace/archivesspace.small.png", :class=>"img-responsive", :alt=>"My image alt text" %> +</div> +``` + +Change the `"archivesspace/archivesspace.small.png"` to the path to your image `/assets/images/logo.png` and place your logo in the `plugins/local/frontend/assets/images/` directory. You'll most likely need to create one or more of the directories.
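With that change in place, the pasted `_branding.html.erb` would end up looking something like this (assuming, as above, that your logo was saved to `/assets/images/logo.png`):

```erb
<div class="container-fluid navbar-branding">
  <%= image_tag "/assets/images/logo.png", :class=>"img-responsive", :alt=>"My image alt text" %>
</div>
```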
+ +**Note:** Since anything we add to the plugins directory will not be precompiled by +the Rails asset pipeline, we cannot use some of the tag helpers +(like `img_tag`), since those assume the asset is being managed by the +asset pipeline. + +Restart the application and you should see your logo in the default view. + +## Adding CSS rules + +You can customize CSS through the plugin system too. If you don't want to create +a whole new plugin, the easiest way is to modify the 'local' plugin that ships +with ArchivesSpace (it's intended for these kinds of site-specific changes). As +long as you've still got 'local' listed in your `AppConfig[:plugins]` list, your +changes will get picked up. + +To do that, create a file called +`archivesspace/plugins/local/frontend/views/layout_head.html.erb` for the staff +side or `archivesspace/plugins/local/public/views/layout_head.html.erb` for the +public. Then add this line to include the CSS in the site: + +```erb +<%= stylesheet_link_tag "#{@base_url}/assets/custom.css" %> +``` + +Then place your CSS in the file: + + staff side: + archivesspace/plugins/local/frontend/assets/custom.css + or public side: + archivesspace/plugins/local/public/assets/custom.css + +and it will get loaded on each page. + +You may also want to make changes to the main index page, or the header and +footer. Those overrides would go into the following places for the public side +of your site: + + archivesspace/plugins/local/public/views/welcome/show.html.erb + archivesspace/plugins/local/public/views/shared/_header.html.erb + archivesspace/plugins/local/public/views/shared/_footer.html.erb + +## Heavy re-theming + +If you're wanting to really trick out your site, you could do this in a plugin +using the override methods shown above, although there are some big disadvantages +to this. The first is that assets will not be compiled by the Rails asset +pipeline.
Another is that you won't be able to take advantage of the variables +and mixins that Bootstrap and Less provide as a framework, which really helps +keep your assets well organized. + +A better way to do this is to pull down a copy of the ArchivesSpace code and +build out a new theme. A good resource on how to do this is +[this video](https://www.youtube.com/watch?v=Uny736mZVnk). +This video covers the staff frontend UI, but the same steps can be applied to +the public UI as well. + +Also become a little familiar with the +[build system instructions](/development/dev). + +First, pull down a new copy of ArchivesSpace using git and be sure to check out +a tag matching the version you're using or wanting to use. + +```shell +$ git clone https://github.com/archivesspace/archivesspace.git +$ cd archivesspace +$ git checkout v2.5.2 +``` + +You can start your application development server by executing: + +```shell +$ ./build/run bootstrap +$ ./build/run backend:devserver +$ ./build/run frontend:devserver +$ ./build/run public:devserver +``` + +**Note:** You don't have to run all these commands all the time. The bootstrap +command really only has to be run the first time you pull down the code -- +it will also take a while. You also don't have to start the frontend or public +if you're not working on those interfaces. Backend does have to be started for +either the public or frontend interfaces to work. + +Follow the instructions in the video to create a new theme. A good way is to copy the existing default theme to a new folder and start making your updates. Be sure to take advantage of the existing variables set in the Less files to make your assets nice and organized. + +Once you've updated your theme and have got it working, you can package your application. You can use the `./scripts/build_release` script to build a totally fresh AS distribution, but you don't need to do that if you've simply made some minor changes to the UI.
Instead, use `./build/run public:war` to compile your assets and package a war file. You can then take this public.war file and replace your ASpace distribution war file. + +Be sure to update your theme setting in the `config.rb` file and restart ASpace. diff --git a/src/content/docs/fr/customization/xsl.md b/src/content/docs/fr/customization/xsl.md new file mode 100644 index 0000000..5ed0605 --- /dev/null +++ b/src/content/docs/fr/customization/xsl.md @@ -0,0 +1,17 @@ +--- +title: XSL stylesheets +description: Provides information about the XSL stylesheets for transforming ArchivesSpace EAC-CPF and EAD exports into HTML or PDF, using Saxon for processing. +--- + +ArchivesSpace includes three stylesheets for you to transform exported data +into human-friendly formats. The stylesheets included are as follows: + +- `as-eac-cpf-html.xsl`: Generates HTML from EAC-CPF records +- `as-ead-html.xsl`: Generates HTML from EAD records +- `as-ead-pdf.xsl`: Generates XSL-FO output from EAD for transformation into PDF + +These stylesheets have been tested and are known to work with +[Saxon](http://saxonica.com/download/download_page.xml) 9.5.1.1 and higher. + +The `as-helper-functions.xsl` stylesheet is required by the other three +stylesheets listed above. diff --git a/src/content/docs/fr/development/dev.md b/src/content/docs/fr/development/dev.md new file mode 100644 index 0000000..b33f69d --- /dev/null +++ b/src/content/docs/fr/development/dev.md @@ -0,0 +1,495 @@ +--- +title: Development environment +description: Guidance for setting up a development environment for ArchivesSpace, including system requirements, supported development platforms, a quickstart guide, and step-by-step instructions.
+--- + +System requirements: + +- Java 17 +- [Docker](https://www.docker.com/) & [Docker Compose](https://docs.docker.com/compose/) is optional but makes running MySQL and Solr more convenient +- [Supervisord](http://supervisord.org/) is optional but makes running the development servers more convenient +- [mysql-client](https://www.bytebase.com/reference/mysql/how-to/how-to-install-mysql-client-on-mac-ubuntu-centos-windows/) is required in order to load demo data or other sql dumps onto the database + +Currently supported platforms for development: + +- Linux (although generally only Ubuntu is actually used / tested) +- macOS on Intel (x86_64) +- macOS on Apple silicon (ARM64) _since v4.0.0_ + +:::note[Apple silicon and ArchivesSpace before v4.0.0] +To install versions of ArchivesSpace prior to v4.0.0 with macOS on Apple silicon, see [https://teaspoon-consulting.com/articles/archivesspace-on-the-m1.html](https://teaspoon-consulting.com/articles/archivesspace-on-the-m1.html). +::: + +:::danger[Windows development not supported] +Windows is not supported because of issues building gems with C extensions (such as sassc). +::: + +When installing Java, [OpenJDK](https://openjdk.org/) is strongly recommended. Other vendors may work, but OpenJDK is most extensively used and tested. It is highly recommended that you use a version manager such as [mise](https://mise.jdx.dev/lang/java.html) to install Java (OpenJDK). This has proven to be a reliable way of resolving cross platform issues that have occured via other means of installing Java. 
+ +Installing OpenJDK with mise will look something like: + +```bash +mise use -g java@openjdk-17 +``` + +On Linux/Ubuntu it is generally fine to install from system packages: + +```bash +sudo apt install openjdk-$VERSION-jdk-headless +# example: install 17 +sudo apt install openjdk-17-jdk-headless +# update-java-alternatives can be used to switch between versions +sudo update-java-alternatives --list +sudo update-java-alternatives --set $version +``` + +For [Homebrew](https://brew.sh/) users (macOS, Linux), the OpenJDK distribution from Azul has been reported to work: + +```bash +# install Java v17 for example +brew install --cask zulu@17 +``` + +If using Docker & Docker Compose install them following the official documentation: + +- [https://docs.docker.com/get-docker/](https://docs.docker.com/get-docker/) +- [https://docs.docker.com/compose/install/](https://docs.docker.com/compose/install/) + +_Do not use system packages or any other unofficial source as these have been found to be inconsistent with standard Docker._ + +The recommended way of developing ArchivesSpace is to fork the repository and clone it locally. 
+ +_Note: all commands in the following instructions assume you are in the root directory of your local fork +unless otherwise specified._ + +**Quickstart** + +This is an abridged reference for getting started with a limited explanation of the steps: + +```bash +# Build images (required one time only for most use cases) +docker-compose -f docker-compose-dev.yml build +# Run MySQL and Solr in the background +docker-compose -f docker-compose-dev.yml up --detach +# Download the MySQL connector +cd ./common/lib && wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.30/mysql-connector-java-8.0.30.jar && cd - +# Download all application dependencies +./build/run bootstrap +# OPTIONAL: load dev database +gzip -dc ./build/mysql_db_fixtures/demo.sql.gz | mysql --host=127.0.0.1 --port=3306 -u root -p123456 archivesspace +# Setup the development database +./build/run db:migrate +# Clear out any existing Solr state (only needed after a database setup / restore after previous development) +./build/run solr:reset +# Run the development servers +supervisord -c supervisord/archivesspace.conf +# OPTIONAL: Run a backend (api) test (for checking setup is correct) +./build/run backend:test -Dexample="User model" +``` + +## Step by Step explanation + +### Run MySQL and Solr + +ArchivesSpace development requires MySQL and Solr to be running. The easiest and +recommended way to run them is using the Docker Compose configuration provided by ArchivesSpace. + +Start by building the images. This creates a custom Solr image that includes ArchivesSpace's configuration: + +```bash +docker-compose -f docker-compose-dev.yml build +``` + +_Note: you only need to run the above command once. 
You would only need to rerun this command if a) +you delete the image and therefore need to recreate it, or b) you make a change to ArchivesSpace's Solr +configuration and therefore need to rebuild the image to include the updated configuration._ + +Run MySQL and Solr in the background: + +```bash +docker-compose -f docker-compose-dev.yml up --detach +``` + +By using Docker Compose to run MySQL and Solr you are guaranteed to have the correct connection settings +and don't otherwise need to define connection settings for MySQL or Solr. + +Verify that MySQL & Solr are running: `docker ps`. It should list the running containers: + +```txt +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +ec76bd09d73b mysql:8.0 "docker-entrypoint.s…" 8 hours ago Up 8 hours 33060/tcp, 0.0.0.0:3307->3306/tcp as_test_db +30574171530f archivesspace/solr:latest "docker-entrypoint.s…" 8 hours ago Up 8 hours 0.0.0.0:8984->8983/tcp as_test_solr +d84a6a183bb0 archivesspace/solr:latest "docker-entrypoint.s…" 8 hours ago Up 8 hours 0.0.0.0:8983->8983/tcp as_dev_solr +7df930293875 mysql:8.0 "docker-entrypoint.s…" 8 hours ago Up 8 hours 0.0.0.0:3306->3306/tcp, 33060/tcp as_dev_db +``` + +To check the servers are online: + +- MySQL: `mysql -h 127.0.0.1 -u as -pas123 archivesspace` +- Solr: `curl http://localhost:8983/solr/admin/cores` + +To stop and / or remove the servers: + +```bash +docker-compose -f docker-compose-dev.yml stop # shuts down the servers (data will be preserved) +docker-compose -f docker-compose-dev.yml rm # deletes the containers (all data will be removed) +``` + +**Advanced: running MySQL and Solr outside of Docker** + +You are not required to use Docker for MySQL and Solr.
If you run them another way the default +requirements are: + +- dev MySQL, localhost:3306 create db: archivesspace, username: as, password: as123 +- test MySQL, localhost:3307 create db: archivesspace, username: as, password: as123 +- dev Solr, localhost:8983 create archivesspace core using ArchivesSpace configuration +- test Solr, localhost:8984, create archivesspace core using ArchivesSpace configuration + +The defaults can be changed using [environment variables](https://github.com/archivesspace/archivesspace/blob/master/build/build.xml#L43-L46) located in the build file. + +### Download the MySQL connector + +For licensing reasons the MySQL connector must be downloaded separately: + +```bash +cd ./common/lib +wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.30/mysql-connector-java-8.0.30.jar +cd - +``` + +### Run bootstrap + +The bootstrap task: + + ./build/run bootstrap + +Will bootstrap your development environment by downloading all +dependencies--JRuby, Gems, etc. This one command creates a fully +self-contained development environment where everything is downloaded +within the ArchivesSpace project `build` directory. + +_It is not necessary and generally incorrect to manually install JRuby +& bundler etc. for ArchivesSpace (whether with a version manager or +otherwise)._ + +_The self-contained ArchivesSpace development environment typically does +not interact with other J/Ruby environments you may have on your system +(such as those managed by rbenv or similar)._ + +This is the starting point for all ArchivesSpace development. You may need +to re-run this command after fetching updates, or when making changes to +Gemfiles or other dependencies such as those in the `./build/build.xml` file. 
+ +**Errors running bootstrap** + +```txt + [java] INFO: jetty-9.4.44.v20210927; built: 2021-09-27T23:02:44.612Z; git: 8da83308eeca865e495e53ef315a249d63ba9332; jvm 11+28 + [java] Exiting + [java] LoadError: no such file to load -- rails/commands + [java] require at org/jruby/RubyKernel.java:974 + [java] <main> at script/rails:8 +``` + +This error has been seen when starting any of the development servers: + + ./build/run backend:devserver + ./build/run frontend:devserver + ./build/run public:devserver + ./build/run indexer + +There have been various forms of the same `LoadError`. It's a transient error +that is resolved by rerunning bootstrap. + +```txt + [java] org.jruby.Main -I uri:classloader://META-INF/jruby.home/lib/ruby/stdlib -r + [java] ./siteconf20220407-5224-13f6qi7.rb extconf.rb + [java] sh: /Library/Internet: No such file or directory + [java] sh: line 0: exec: /Library/Internet: cannot execute: No such file or directory + [java] + [java] extconf failed, exit code 126 +``` + +This has been seen on Mac platforms resulting from the installation method +for Java. Installing the OpenJDK via Jabba has been effective in resolving +this error. + +**Advanced: bootstrap & the build directory** + +Running bootstrap will download jars to the build directory, including: + +- jetty-runner +- jruby +- jruby-rack + +Gems will be downloaded to: `./build/gems/jruby/$version/gems/`. + +### Setup the development database + +The migrate task: + +```bash +./build/run db:migrate +``` + +Will set up the development database, creating all of the tables etc. +required by the application. + +There is a task for resetting the database: + +```bash +./build/run db:nuke +``` + +Which will first delete then migrate the database. + +### Loading data fixtures into dev database + +When loading a database into the development MySQL instance, always ensure that ArchivesSpace +is **not** running. Stop ArchivesSpace if it is running. Run `./build/run solr:reset` to +clear indexer state (a more thorough explanation of this step is described below).
+ +If you are loading a database and MySQL has already been used for development you'll want to +drop and create an empty database first. + +```bash +mysql -h 127.0.0.1 -u as -pas123 -e "DROP DATABASE archivesspace" +mysql -h 127.0.0.1 -u as -pas123 -e "CREATE DATABASE IF NOT EXISTS archivesspace DEFAULT CHARACTER SET utf8mb4" +``` + +_Note: you can skip the above step if MySQL was just started for the first time or any time you +have an empty ArchivesSpace (one where `db:migrate` has not been run)._ + +Assuming you have MySQL running and an empty `archivesspace` database available you can proceed +to restore: + +```bash +gzip -dc ./build/mysql_db_fixtures/blank.sql.gz | mysql --host=127.0.0.1 --port=3306 -u root -p123456 archivesspace +./build/run db:migrate +``` + +_Note: The above instructions should work out-of-the-box. If you want to use your own database +and / or have configured MySQL differently then adjust the commands as needed._ + +After the restore `./build/run db:migrate` is run to catch any migration updates. You can now +proceed to run the application dev servers, as described below, with data already +populated in ArchivesSpace. + +### Clear out existing Solr state + +The Solr reset task: + +```bash +./build/run solr:reset +``` + +Will wipe out any existing Solr state. This is not required when setting +up for the first time, but is often required after a database reset (such as +after running the `./build/run db:nuke` task). 
+ +_More specifically what this does is submit a delete all request to Solr and empty +out the contents of the `./build/dev/indexer*_state` directories, which is described +below._ + +### Run the development servers + +Use [Supervisord](http://supervisord.org/) for a simpler way of running the development servers with output +for all servers sent to a single terminal window: + +```bash +# run all of the services +supervisord -c supervisord/archivesspace.conf + +# run in api mode (backend + indexer only) +supervisord -c supervisord/api.conf + +# run just the backend (useful for trying out endpoints that don't require Solr) +supervisord -c supervisord/backend.conf +``` + +ArchivesSpace is started with: + +- the staff interface on [http://localhost:3000/](http://localhost:3000/) +- the PUI on [http://localhost:3001/](http://localhost:3001/) +- the API on [http://localhost:4567/](http://localhost:4567/) + +To stop supervisord: `Ctrl-c`. + +#### Advanced: running the development servers directly + +Supervisord is not required, nor ideal for every situation. You can run the development +servers directly via build tasks: + +```bash +./build/run backend:devserver # This is the REST API +./build/run frontend:devserver # This is the staff user interface +./build/run public:devserver # This is the public user interface +./build/run indexer # This is the indexer (converts ASpace records to Solr Docs and ships to Solr) +``` + +These should be run in separate terminal sessions; they do not need to be started in a specific order, and not all of them are required.
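For orientation, the supervisord configuration files referenced above are essentially thin wrappers around these same build tasks: each server becomes a `[program:x]` section. The snippet below is a schematic illustration only, not the contents of the actual files (see the real configuration under `supervisord/` in the repository):

```ini
; Schematic sketch of a supervisord config for ArchivesSpace development.
; The actual files under supervisord/ differ in detail.
[program:backend]
command = ./build/run backend:devserver

[program:indexer]
command = ./build/run indexer

[program:frontend]
command = ./build/run frontend:devserver

[program:public]
command = ./build/run public:devserver
```

This is why `supervisord -c supervisord/api.conf` starts only the backend and indexer: that file simply includes fewer program sections.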
+ +_An example use case for running a server directly is to use the pry debugger._ + +#### Advanced: debugging with pry + +To debug with pry you cannot use supervisord to run the application devserver, +however you can mix and match: + +```bash +# run the backend and indexer with supervisord +supervisord -c supervisord/api.conf + +# in a separate terminal run the frontend directly +./build/run frontend:devserver +``` + +Add `require 'pry-debugger-jruby'; binding.pry` to set breakpoints in the code. This can also be used in views: +`<% require 'pry-debugger-jruby'; binding.pry %>`. Using pry you can easily inspect the `request`, `params` and +in-scope instance variables that are available. Typical debugger commands are available: + +- `step`: Step execution into the next line or method. Takes an optional numeric argument to step multiple times. +- `next`: Step over to the next line within the same frame. Takes an optional numeric argument to step multiple times. Differs from step in that it always stays within the same frame (e.g. does not go into other method calls). +- `finish`: Execute until current stack frame returns. +- `continue`: Continue program execution and end the Pry session. +- `puts caller.join("\n")`: Get the current stacktrace. + +See also [pry-debugger-jruby docs](https://gitlab.com/ivoanjo/pry-debugger-jruby). + +#### Advanced: development servers and the build directory + +Running the development servers will create directories in `./build/dev`: + +- indexer_pui_state: latest timestamps for PUI indexer activity +- indexer_state: latest timestamps for (SUI) indexer activity +- shared: background job files + +_Note: the folders will be created as they are needed, so they may not all be present +at all times._ + +#### Accessing development servers from other devices on the local network + +You can access the ArchivesSpace development servers from other devices on your local network.
This is especially useful for testing on mobile operating systems. + +##### Prerequisites + +1. Your development machine and the other device must be on the same WiFi network +2. The ArchivesSpace development servers must be running on the development machine + +##### Steps + +1. Get your development machine's local IP address + + On macOS: + + ```bash + ipconfig getifaddr en0 + ``` + + On Linux: + + ```bash + hostname -I | awk '{print $1}' + ``` + + This returns something like `134.192.0.47`. + +2. Start the [development servers](#run-the-development-servers) + + The development servers bind to `0.0.0.0` by default, making them accessible from other devices on the network (see the [frontend binding example](https://github.com/archivesspace/archivesspace/blob/f77dec627cd1feac77e4b67f9242d617efe80e94/build/build.xml#L899)). + +3. Access from another device + + On the other device, open a web browser and navigate to your development machine's IP address with the appropriate port, e.g. `http://<your-local-ip>:<port>/`. + + So for IP address `134.192.0.47`: + - Staff interface: `http://134.192.0.47:3000/` + - Public interface: `http://134.192.0.47:3001/` + - API: `http://134.192.0.47:4567/` + +## Running the tests + +### Backend tests + +_By default the tests are configured to run using a separate MySQL & Solr from the +development servers. This means that the development and test environments will not +interfere with each other._ + +```bash +# run the backend / api tests +./build/run backend:test +``` + +You can also run a single spec file with: + +```bash +./build/run backend:test -Dspec="myfile_spec.rb" +``` + +Or a single example with: + +```bash +./build/run backend:test -Dexample="does something important" +``` + +Or by file line with: + +```bash +./build/run backend:test -Dspec="myfile_spec.rb:123" +``` + +There are specific instructions and requirements for the [UI tests](/development/ui_test) to work.
+ +**Advanced: tests and the build directory** + +Running the tests may create directories in `./build/test`. These will be +the same as for the development servers as described above. + +## Coverage reports + +You can run the coverage reports using: + + ./build/run coverage + +This runs all of the above tests in coverage mode and, when the run +finishes, produces a set of HTML reports within the `coverage` +directory in your ArchivesSpace project directory. + +## Linting and formatting with Rubocop + +If you are editing or adding source files that you intend to contribute via a pull request, +you should make sure your changes conform to the layout and style rules by running: + + ./build/run rubocop + +Most errors can be auto-corrected by running: + + ./build/run rubocop -Dcorrect=true + +## Submitting a Pull Request + +When you have code ready to be reviewed, open a pull request to ask for it to be +merged into the codebase. + +To help make the review go smoothly, here are some general guidelines: + +- **Your pull request should address a single issue.** + It's better to split large or complicated PRs into discrete steps if possible. This + makes review more manageable and reduces the risk of conflicts with other changes. +- **Give your pull request a brief title, referencing any JIRA or GitHub issues resolved + by the pull request.** + Including JIRA numbers (e.g. 'ANW-123') explicitly in your pull request title ensures the + PR will be linked to the original issue in JIRA. Similarly, referencing GitHub issue numbers + (e.g. 'Fixes #123') will automatically close that issue when the PR is merged. +- **Fill out as much of the Pull Request template as is possible/relevant.** + This makes it easier to understand the full context of your PR, including any discussions or supporting documentation that went into developing the functionality or resolving the bug.
+ +## Building a distribution + +See: [Building an ArchivesSpace Release](/development/release) for information on building a distribution. + +## Generating API documentation + +See: [Building an ArchivesSpace Release](/development/release) for information on building the documentation. diff --git a/src/content/docs/fr/development/docker.md b/src/content/docs/fr/development/docker.md new file mode 100644 index 0000000..8168231 --- /dev/null +++ b/src/content/docs/fr/development/docker.md @@ -0,0 +1,42 @@ +--- +title: Docker +description: A guide to using the Docker configuration with ArchivesSpace. +--- + +The [Docker](https://www.docker.com/) configuration is used to create [automated builds](https://hub.docker.com/r/archivesspace/archivesspace/) on Docker Hub, which are deployed to [the latest version](http://test.archivesspace.org) when the build completes. + +## Custom builds + +Run ArchivesSpace with MySQL, external Solr and a Web Proxy. Switch to the +branch you want to build: + +```bash +# if you already have running containers and want to clear them out +docker-compose stop +docker-compose rm + +# build the local image +docker-compose build # needed whenever the branch is changed and ready to test +docker-compose up + +# running specific containers +docker-compose up -d db solr # in background +docker-compose up app web # in foreground + +# to access a running container +docker exec -it archivesspace_app_1 bash +``` + +## Sharing an image + +To share the built image, the easiest way is to create an account on [Docker Hub](https://hub.docker.com/). Next, retag the image and push it to the hub account: + +```bash +DOCKER_ID_USER=example +TAG=awesome-updates +docker tag archivesspace_app:latest $DOCKER_ID_USER/archivesspace:$TAG +docker push $DOCKER_ID_USER/archivesspace:$TAG +``` + +To download the image: `docker pull example/archivesspace:awesome-updates`.
+
+---
diff --git a/src/content/docs/fr/development/e2e_tests.md b/src/content/docs/fr/development/e2e_tests.md
new file mode 100644
index 0000000..2a78b10
--- /dev/null
+++ b/src/content/docs/fr/development/e2e_tests.md
@@ -0,0 +1,152 @@
+---
+title: ArchivesSpace End-to-End Test Suite
+description: Instructions on running the end-to-end test suite.
+---
+
+For more context on the [End-to-End test suite](https://github.com/archivesspace/archivesspace/tree/master/e2e-tests) and how to contribute tests, see our [wiki page](https://archivesspace.atlassian.net/wiki/spaces/ADC/pages/4606590990/How+to+contribute+End+to+End+test+scenarios).
+
+## Recommended setup
+
+### Using a version manager
+
+The required Ruby version for the e2e test application is documented in [`./.ruby-version`](./.ruby-version).
+
+It is strongly recommended to use a version manager (such as [mise](https://mise.jdx.dev/)) so you can switch to whichever Ruby version a given project requires.
+
+#### mise
+
+We recommend using [mise](https://mise.jdx.dev/) to manage Ruby (and other runtimes). Installation instructions are available at [Getting started](https://mise.jdx.dev/getting-started.html).
+
+#### Alternatives to `mise`
+
+If you wish to use a different Ruby manager or installation method, see [Ruby's installation documentation](https://www.ruby-lang.org/en/documentation/installation/).
+
+### Installation
+
+From the ArchivesSpace root directory, navigate to the e2e test application, then install Ruby, Bundler, and the application dependencies:
+
+```sh
+# 1. Navigate to the e2e-tests directory
+cd e2e-tests
+
+# 2. Install Ruby at the version specified in ./.tool-versions
+mise install
+
+# 3. Install the Bundler dependency manager
+gem install bundler
+
+# 4. Install project dependencies
+bundle install
+```
+
+## Running the tests locally
+
+### Just working on the e2e tests with Docker
+
+If you are just working on e2e tests and not touching the ArchivesSpace application, you can run e2e tests locally against the latest ArchivesSpace `master` branch build using Docker.
+
+#### Install Docker Desktop
+
+[Docker Desktop](https://www.docker.com/get-started/) is a one-click-install application for Linux, Mac, and Windows. It provides both terminal and GUI access to Docker. Download and install the appropriate version for your operating system from the link above. You can also use alternative software for running Docker containers, such as [OrbStack](https://orbstack.dev/) for macOS.
+
+#### Run the latest ArchivesSpace Docker image
+
+```sh
+# Get the latest ArchivesSpace `master` branch build
+docker compose pull
+
+# Start ArchivesSpace servers
+docker compose up
+```
+
+Verify the servers are running by opening [http://localhost:8080](http://localhost:8080) in a browser.
+
+### Working with an ArchivesSpace development environment
+
+You can run the e2e test suite against your local ArchivesSpace development environment, but be aware that your database, Solr index, and any configuration changes will need to be reset.
+
+#### Reset your database and Solr index
+
+Make sure your ArchivesSpace instance has a [blank database](https://docs.archivesspace.org/development/dev/#loading-data-fixtures-into-dev-database) and [blank Solr index](https://docs.archivesspace.org/development/dev/#clear-out-existing-solr-state).
+
+#### Restore default configuration options (except for `AppConfig[:db_url]`)
+
+Make sure you override any local changes to the default configuration options (via `../common/config/config.rb`) by commenting them out or deleting them, except for `AppConfig[:db_url]` (which is required for using the MySQL database).
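To make this concrete, a pared-down `common/config/config.rb` for this scenario might keep only the database URL. This is an illustrative sketch, not the project's actual file; the `as`/`as123` credentials shown are common development defaults, not values to keep in production:

```ruby
# Illustrative sketch of common/config/config.rb for e2e runs:
# every other AppConfig option stays commented out so it falls
# back to its application default.
AppConfig[:db_url] = "jdbc:mysql://localhost:3306/archivesspace?user=as&password=as123&useUnicode=true&characterEncoding=UTF-8"
```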
+ +#### Run the frontend dev server + +Start the `frontend:devserver` as described [here](https://docs.archivesspace.org/development/dev/#run-the-development-servers). Verify it is running by opening [http://localhost:3000/](http://localhost:3000/) in your browser. + +#### Run the public dev server + +Start the `public:devserver` as described [here](https://docs.archivesspace.org/development/dev/#run-the-development-servers). Verify it is running by opening [http://localhost:3001/](http://localhost:3001/) in your browser. + +#### Set the `STAFF_URL` environment variable + +Set your `STAFF_URL` environment variable to point the e2e tests at the local development server: + +```sh +export STAFF_URL='http://localhost:3000' +``` + +#### Set the `PUBLIC_URL` environment variable + +Set your `PUBLIC_URL` environment variable to point the e2e tests at the local public interface: + +```sh +export PUBLIC_URL='http://localhost:3001' +``` + +## Running tests + +After setting the appropriate `STAFF_URL` and `PUBLIC_URL` environment variables as described above, run the desired test(s) according to the following commands. + +### All test files at once + +```sh +bundle exec cucumber staff_features/ +``` + +### All scenarios in a specific file + +```sh +bundle exec cucumber staff_features/assessments/assessment_create.feature +``` + +### A specific scenario in a specific file + +```sh +bundle exec cucumber staff_features/assessments/assessment_create.feature --name 'Assessment is created' +``` + +## Debugging + +Add a `byebug` statement in any `.rb` file to set a breakpoint and start a debugging session in the console while running. See more [here](https://github.com/deivid-rodriguez/byebug). Don't forget to remove any `byebug` statements before a `git push`... 
+ +If you need to see the browser while running the test scenario and debugging, add a `HEADLESS=''` argument, as in: + +```sh +bundle exec cucumber HEADLESS='' staff_features/ +``` + +## Linters + +This test suite uses two linters, [`cuke_linter`](https://github.com/enkessler/cuke_linter) and [`rubocop`](https://rubocop.org/), to maintain code quality. + +```sh +# Lints Cucumber .feature files +bundle exec cuke_linter + +# Lints Ruby .rb files +bundle exec rubocop +``` + +## Editor integration (optional) + +ArchivesSpace provides optional VS Code workspace tasks that can run the end-to-end test suite without manually setting environment variables or changing directories. + +These tasks execute the same cucumber commands described above and are simply a convenience wrapper around the documented command-line workflow. + +Setup instructions are documented in the **VS Code guide** [here](https://docs.archivesspace.org/development/vscode/). + +Contributors not using VS Code can ignore this section and run the tests directly from the command line. diff --git a/src/content/docs/fr/development/ead-exporter.md b/src/content/docs/fr/development/ead-exporter.md new file mode 100644 index 0000000..55cc9cb --- /dev/null +++ b/src/content/docs/fr/development/ead-exporter.md @@ -0,0 +1,31 @@ +--- +title: Repository EAD Exporter +description: A guide to export all published resources' EAD within a specified repository into a single zip archive. +--- + +Exports all published resource record EAD XML files associated with a single +repository into a zip archive. This zip file will be saved in the ArchivesSpace +data directory (as defined in `config.rb`) and include the repository id in the +filename. 
+
+## Usage
+
+```sh
+./scripts/ead_export.sh user password repository_id
+```
+
+A best practice is to keep the password in a hidden file with restricted permissions:
+
+```sh
+touch ~/.aspace_password
+chmod 0600 ~/.aspace_password
+vi ~/.aspace_password # enter your password
+```
+
+Then call the script like:
+
+```sh
+./scripts/ead_export.sh user $(cat /home/user/.aspace_password) repository_id
+```
+
+This way you avoid exposing the password directly on the command line or in a crontab entry.
diff --git a/src/content/docs/fr/development/index.md b/src/content/docs/fr/development/index.md
new file mode 100644
index 0000000..e0fdd9d
--- /dev/null
+++ b/src/content/docs/fr/development/index.md
@@ -0,0 +1,13 @@
+---
+title: Development
+description: The index to the development section of the ArchivesSpace technical documentation.
+---
+
+- [Running a development version of ArchivesSpace](./dev.html)
+- [Building an ArchivesSpace release](./release.html)
+- [Docker](./docker.html)
+- [DB versions listed by release](./release_schema_versions.html)
+- [User Interface Test Suite](./ui_test.html)
+- [Upgrading Rack for ArchivesSpace](./jruby-rack-build.html)
+- [ArchivesSpace Releases](./releases.html)
+- [Using the VS Code editor for local development](./vscode.html)
diff --git a/src/content/docs/fr/development/jruby-rack-build.md b/src/content/docs/fr/development/jruby-rack-build.md
new file mode 100644
index 0000000..9db3b5e
--- /dev/null
+++ b/src/content/docs/fr/development/jruby-rack-build.md
@@ -0,0 +1,96 @@
+---
+title: Upgrading Rack
+description: A guide to upgrading Rack.
+---
+
+- Install a local JRuby (matching the ArchivesSpace version, currently 9.2.12.0) and switch to it.
+- Install Maven.
+- Download jruby-rack.
+ +```shell +git checkout 1.1-stable +# install bundler version to match 1.1-stable Gemfile.lock +gem install bundler --version=1.14.6 +``` + +Should result in: + +``` +Fetching bundler-1.14.6.gem +Successfully installed bundler-1.14.6 +Parsing documentation for bundler-1.14.6 +Installing ri documentation for bundler-1.14.6 +Done installing documentation for bundler after 5 seconds +1 gem installed +``` + +Set environment to target rack version (the version being upgraded to): + +```shell +export RACK_VERSION=2.2.3 +bundle +``` + +Should result in: + +``` +Fetching gem metadata from https://rubygems.org/............. +Fetching version metadata from https://rubygems.org/.. +Resolving dependencies... +Installing rake 10.4.2 +Using bundler 1.14.6 +Using diff-lcs 1.2.5 +Installing jruby-openssl 0.9.21 (java) +Using rack 2.2.3 (was 1.6.8) +Using rspec-core 2.14.8 +Using rspec-mocks 2.14.6 +Using appraisal 0.5.2 +Using rspec-expectations 2.14.5 +Using rspec 2.14.1 +Bundle complete! 5 Gemfile dependencies, 10 gems now installed. +Use `bundle show [gemname]` to see where a bundled gem is installed. +``` + +This will have bumped the Rack version in Gemfile.lock: + +```diff +diff --git a/Gemfile.lock b/Gemfile.lock +index 493c667..f016925 100644 +--- a/Gemfile.lock ++++ b/Gemfile.lock +@@ -6,7 +6,7 @@ GEM + rake + diff-lcs (1.2.5) + jruby-openssl (0.9.21-java) +- rack (1.6.8) ++ rack (2.2.3) + rake (10.4.2) + rspec (2.14.1) + rspec-core (~> 2.14.0) +@@ -23,7 +23,7 @@ PLATFORMS + DEPENDENCIES + appraisal + jruby-openssl (~> 0.9.20) +- rack (~> 1.6.8) ++ rack (= 2.2.3) + rake (~> 10.4.2) + rspec (~> 2.14.1) +``` + +Build the jruby-rack jar: + +```bash +bundle exec jruby -S rake clean gem SKIP_SPECS=true +``` + +This creates `target/jruby-rack-1.1.21.jar` with Rack 2.2.3. 
+ +Upload the jar to the public s3 bucket, specifying the rack version: + +```bash +aws s3 cp target/jruby-rack-1.1.21.jar \ + s3://as-public-shared-files/jruby-rack-1.1.21_rack-2.2.3.jar \ + --profile archivesspace +``` + +Finally, update `rack_version` in the aspace `build.xml` file. diff --git a/src/content/docs/fr/development/release.md b/src/content/docs/fr/development/release.md new file mode 100644 index 0000000..b157437 --- /dev/null +++ b/src/content/docs/fr/development/release.md @@ -0,0 +1,263 @@ +--- +title: Building a release +description: How to build an ArchivesSpace release. +--- + +- [Pre-release steps](#pre-release-steps) +- [Build the docs](#build-and-publish-the-api-and-yard-docs) +- [Build the release](#building-a-release-yourself) +- [Post the release with release notes](#create-the-release-with-notes) +- [Post-release updates](#post-release-updates) + +## Clone the git repository + +When building a release it is important to start from a clean repository. The +safest way of ensuring this is to clone the repo: + +```shell +git clone https://github.com/archivesspace/archivesspace.git +``` + +## Checkout the release branch and create release tag + +If you are building a major or minor version (see [https://semver.org](https://semver.org)), +start by creating a branch for the release and all future patch releases: + +```shell +git checkout -b release-v1.0.x +git tag v1.0.0 +``` + +If you are building a patch version, just check out the existing branch and see below: + +```shell +git checkout release-v1.0.x +``` + +Patch versions typically arise because a regression or critical bug has arisen since +the last major or minor release. We try to ensure that the "hotfix" is merged into both +master and the release branch without the need to cherry-pick commits from one branch to +the other. The reason is that cherry-picking creates a new commit (with a new commit id) +that contains identical changes, which is not optimal for the repository history. 
+
+It is therefore preferable to start from the release branch when creating a "hotfix"
+that needs to be merged into both the release branch and master. The Pull Request should
+then be based on the release branch. After that Pull Request has been through code review,
+QA and merged, a second Pull Request should be created to merge the updated release branch
+to master.
+
+Consider the following scenario. The current production release is v1.0.0 and a critical
+bug has been discovered. In the time since v1.0.0 was released, new features have been
+added to the master branch, intended for release in v1.1.0:
+
+```shell
+git checkout -b oh-no-some-migration-corrupts-some-data origin/release-v1.0.x
+( fix the problem )
+git commit -m "fix bad migration and add a migration to repair corrupted data"
+gh pr create -B release-v1.0.x --web
+( PR is reviewed and merged to the release branch )
+git checkout release-v1.0.x
+git pull
+git tag v1.0.1
+gh pr create -B master --web
+( PR is reviewed and merged to the master branch )
+```
+
+## Pre-release steps
+
+### Run the ArchivesSpace rake tasks to check for issues
+
+Before proceeding further, it’s a good idea to check that there aren’t missing
+translations or multiple gem versions.
+
+1. Bootstrap your current development environment on the latest master branch
+   by downloading all dependencies--JRuby, Gems, Solr, etc.
+
+   ```shell
+   build/run bootstrap
+   ```
+
+2. Run the following check (recommended):
+
+   ```shell
+   build/run rake -Dtask=check:multiple_gem_versions
+   ```
+
+3. If multiple gem versions are reported, that should be addressed prior to moving on.
+
+## Build and publish the API and Yard Docs
+
+API docs are built using the submodule in `docs/slate` and Docker.
+YARD docs are built using the YARD gem. At this time, they cover a small
+percentage of the code and are not especially useful.
+
+### Build the API docs
+
+1. API documentation depends on the [archivesspace/slate](https://github.com/archivesspace/slate) submodule
+   and on Docker. Slate cannot run on JRuby.
+
+   ```shell
+   git submodule init
+   git submodule update
+   ```
+
+2. Run the `doc:api` task to generate Slate API and Yard documentation. (Note: the
+   API generation requires a DB connection with standard enumeration values.)
+
+   ```shell
+   ARCHIVESSPACE_VERSION=X.Y.Z APPCONFIG_DB_URL=$APPCONFIG_DB_URL build/run doc:api
+   ```
+
+   This generates `docs/slate/source/index.html.md` (Slate source document).
+
+3. (Optional) Run a docker container to preview API docs.
+
+   ```shell
+   docker-compose -f docker-compose-docs.yml up
+   ```
+
+   Visit `http://localhost:4568` to preview the API docs.
+
+4. Build the static API files. The API markdown document should already be in `docs/slate/source` (step 2 above).
+   The API markdown will be rendered to HTML and moved to `docs/build/api`.
+   ```shell
+   docker run --rm --name slate -v $(pwd)/docs/build/api:/srv/slate/build -v $(pwd)/docs/slate/source:/srv/slate/source slatedocs/slate build
+   ```
+
+### Build the YARD docs
+
+1. Build the YARD docs in the `docs/build/doc` directory:
+
+   ```shell
+   ./build/run doc:yardoc
+   ```
+
+### Commit built docs and push to GitHub Pages
+
+1. Double check that you are on a release branch (we don't need this stuff in master).
+   Commit the newly built documentation and push it in the `gh-pages` branch only:
+
+   ```shell
+   git add docs/build
+   git commit -m "release-vx.y.z api and yard documentation"
+   ```
+
+   Use `git subtree` to push the documentation to the `gh-pages` branch:
+
+   ```shell
+   git subtree push --prefix docs/build origin gh-pages
+   ```
+
+   Published documents should appear a short while later at:
+   [http://archivesspace.github.io/archivesspace/api](http://archivesspace.github.io/archivesspace/api)
+   [http://archivesspace.github.io/archivesspace/doc](http://archivesspace.github.io/archivesspace/doc)
+
+   Note: if the push command fails you may need to delete `gh-pages` in the remote repo:
+
+   ```shell
+   git push origin :gh-pages
+   ```
+
+   **Note:** do not push the docs/build directory to the release branch, as it is only meant to be maintained in the `gh-pages` branch.
+
+## Building a release yourself
+
+1. Building the actual release is very simple. Run the following:
+
+   ```shell
+   ./scripts/build_release vX.X.X
+   ```
+
+   Replace X.X.X with the version number. This will build and package a release
+   in a zip file.
+
+## Building a release on GitHub
+
+1. There is no need to build the release yourself. Just push your tag to GitHub
+   and trigger the `release` workflow:
+   ```shell
+   git push origin vX.X.X
+   ```
+   Replace X.X.X with the version number. The release will be created as a **draft**; it will not be automatically published.
+
+## Create the Release with Notes
+
+### Build the release notes
+
+**As of v3.4.0, it should no longer be necessary to build release notes manually.**
+
+To manually generate release notes:
+
+Create a deployment token on your [GitHub developer settings](https://github.com/settings/tokens).
+
+```shell
+export GITHUB_TOKEN={YOUR DEPLOYMENT TOKEN ON GITHUB}
+./build/run doc:release_notes -Dcurrent_tag=v3.4.0 -Doutfile=RELEASE_NOTES.md -Dtoken=$GITHUB_TOKEN
+```
+
+#### Edit Release Page As Necessary
+
+If there are any special considerations, add them to the release page manually. Special considerations
+might include changes that will require third-party plugins to be updated or
+that a full reindex is required.
+
+Example content:
+
+```md
+This release requires a **full reindex** of ArchivesSpace for all functionality to work
+correctly. Please follow the [instructions for reindexing](/administration/indexes)
+before starting ArchivesSpace with the new version.
+```
+
+## Post release updates
+
+After a release has been put out, it's time for some maintenance before the next
+cycle of development clicks into full gear. Consider the following, depending on
+current team consensus:
+
+### Branches
+
+Delete merged and stale branches in GitHub as appropriate.
+
+### Milestones
+
+Close the just-released Milestone, adding a due date of today's date. Create a
+new Milestone for the anticipated next release (this can be changed later if the
+version numbering is changed for some reason).
+
+### Test Servers
+
+Review existing test servers, and request the removal of any that are no longer
+needed (e.g. feature branches that have been merged for the release).
+
+### GitHub Issues
+
+Review existing open GitHub issues and close any that have been resolved by
+the new release (linking to a specific PR if possible). For the remaining open
+issues, review whether they are still a problem, apply labels, link to known JIRA
+issues, and add comments as necessary/relevant.
+
+### Accessibility Scan
+
+Run accessibility scans for both the public and staff sites and file a ticket
+for any new and ongoing accessibility errors.
+
+### PR Assignments
+
+Begin assigning queued PRs to members of the Core Committers group, making
+sure to include the appropriate milestone for the anticipated next release.
+
+### Dependencies
+
+#### Gems
+
+Take a look at all the `Gemfile.lock` files (in backend, frontend, public,
+etc.) and review the gem versions. Pay close attention to the Rails & Friends
+(ActiveSupport, ActionPack, etc.), Rack, and Sinatra versions and make sure
+there have not been any security patch releases. There usually are, especially
+since Rails sends fix updates rather frequently.
+
+To update the gems, update the version in the Gemfile, delete the Gemfile.lock, and
+run `./build/run bootstrap` to download everything. Then make sure your test
+suite passes.
+
+Once everything passes, commit your Gemfiles and Gemfile.lock files.
diff --git a/src/content/docs/fr/development/release_schema_versions.md b/src/content/docs/fr/development/release_schema_versions.md
new file mode 100644
index 0000000..42a75d1
--- /dev/null
+++ b/src/content/docs/fr/development/release_schema_versions.md
@@ -0,0 +1,41 @@
+---
+title: Database versions by release
+description: A list of ArchivesSpace releases and their corresponding database versions.
+---
+
+| Release | DB Version |
+| ------- | ---------- |
+| 1.1.0 | 33 |
+| 1.1.1 | 35 |
+| 1.1.2 | 35 |
+| 1.2.0 | 38 |
+| 1.3.0 | 56 |
+| 1.4.0 | 59 |
+| 1.4.1 | 59 |
+| 1.4.2 | 59 |
+| 1.5.0 | 74 |
+| 1.5.1 | 74 |
+| 1.5.2 | 75 |
+| 1.5.3 | 75 |
+| 1.5.4 | 75 |
+| 2.0.0 | 84 |
+| 2.0.1 | 84 |
+| 2.1.0 | 92 |
+| 2.1.1 | 92 |
+| 2.1.2 | 92 |
+| 2.2.0 | 93 |
+| 2.2.1 | 94 |
+| 2.2.2 | 95 |
+| 2.3.0 | 97 |
+| 2.3.1 | 97 |
+| 2.3.2 | 97 |
+| 2.4.0 | 100 |
+| 2.4.1 | 100 |
+| 2.5.0 | 102 |
+| 2.5.1 | 102 |
+| 2.5.2 | 108 |
+| 2.6.0 | 120 |
+| 2.7.0 | 126 |
+| 2.7.1 | 129 |
+| 2.8.0 | 135 |
+| 2.8.1 | 138 |
diff --git a/src/content/docs/fr/development/releases.md b/src/content/docs/fr/development/releases.md
new file mode 100644
index 0000000..2b31a65
--- /dev/null
+++ b/src/content/docs/fr/development/releases.md
@@ -0,0 +1,192 @@
+---
+title: Releases
+description: A list of ArchivesSpace releases, their release dates, schema numbers, and links to the release on GitHub.
+---
+
+3.4.0 May 24, 2023
+The schema number for this release is 172.
+https://github.com/archivesspace/archivesspace/tree/v3.4.0
+
+3.3.1 Oct 4, 2022
+The schema number for this release is 164.
+https://github.com/archivesspace/archivesspace/tree/v3.3.1
+
+3.2.0 February 4, 2022
+The schema number for this release is 159.
+https://github.com/archivesspace/archivesspace/releases/download/v3.2.0/archivesspace-v3.2.0.zip
+
+3.1.1 November 19, 2021
+The schema number for this release is 157.
+https://github.com/archivesspace/archivesspace/releases/download/v3.1.1/archivesspace-v3.1.1.zip
+
+3.1.0 September 20, 2021
+The schema number for this release is 157.
+https://github.com/archivesspace/archivesspace/releases/download/v3.1.0/archivesspace-v3.1.0.zip
+
+3.0.2 August 11, 2021
+The schema number for this release is 148.
+https://github.com/archivesspace/archivesspace/releases/download/v3.0.2/archivesspace-v3.0.2.zip
+
+3.0.1 June 4, 2021
+The schema number for this release is 147.
+https://github.com/archivesspace/archivesspace/releases/download/v3.0.1/archivesspace-v3.0.1.zip + +3.0.0 May 10, 2021 +The schema number for this release is 147. +[Bug in Release] + +2.8.1 Nov 11, 2020. +The schema number for this release is 138. +https://github.com/archivesspace/archivesspace/releases/download/v2.8.1/archivesspace-v2.8.1.zip + +2.8.0 Jul 16, 2020. +The schema number for this release is 135. +https://github.com/archivesspace/archivesspace/releases/download/v2.8.0/archivesspace-v2.8.0.zip + +2.7.1 Feb 14, 2020. +The schema number for this release is 129. +https://github.com/archivesspace/archivesspace/releases/download/v2.7.1/archivesspace-v2.7.1.zip + +2.7.0 Oct 9, 2019. +The schema number for this release is 126. +https://github.com/archivesspace/archivesspace/releases/download/v2.7.0/archivesspace-v2.7.0.zip + +2.6.0 May 30, 2019. +The schema number for this release is 120. +https://github.com/archivesspace/archivesspace/releases/download/v2.6.0/archivesspace-v2.6.0.zip + +2.5.2 Jan 15, 2019. +The schema number for this release is 108. +https://github.com/archivesspace/archivesspace/releases/download/v2.5.2/archivesspace-v2.5.2.zip + +2.5.1 Oct 17, 2018. +This release includes no new database migrations. +https://github.com/archivesspace/archivesspace/releases/download/v2.5.1/archivesspace-v2.5.1.zip + +2.5.0 Aug 10, 2018. +The schema number for this release is 102. +https://github.com/archivesspace/archivesspace/releases/download/v2.5.0/archivesspace-v2.5.0.zip + +2.4.1 Jun 22, 2018. +This release includes no new database migrations. +https://github.com/archivesspace/archivesspace/releases/download/v2.4.1/archivesspace-v2.4.1.zip + +2.4.0 Jun 7, 2018. +The schema number for this release is 100. +https://github.com/archivesspace/archivesspace/releases/download/v2.4.0/archivesspace-v2.4.0.zip + +2.3.2 Mar 27, 2018. +This release includes no new database migrations. 
+https://github.com/archivesspace/archivesspace/releases/download/v2.3.2/archivesspace-v2.3.2.zip + +2.3.1 Feb 28, 2018. +This release includes no new database migrations. +https://github.com/archivesspace/archivesspace/releases/download/v2.3.1/archivesspace-v2.3.1.zip + +2.3.0 Feb 5, 2018. +The schema number for this release is 97. +https://github.com/archivesspace/archivesspace/releases/download/v2.3.0/archivesspace-v2.3.0.zip + +2.2.2 Dec 13, 2017. +The schema number for this release is 95. +https://github.com/archivesspace/archivesspace/releases/download/v2.2.2/archivesspace-v2.2.2.zip + +2.2.0 Oct 12, 2017. +The schema number for this release is 93. +https://github.com/archivesspace/archivesspace/releases/download/v2.2.0/archivesspace-v2.2.0.zip + +2.1.2 Sep 1, 2017. +The schema number for this release is 92. +https://github.com/archivesspace/archivesspace/releases/download/v2.1.2/archivesspace-v2.1.2.zip + +2.1.1 Aug 16, 2017. +The schema number for this release is 92. +https://github.com/archivesspace/archivesspace/releases/download/v2.1.1/archivesspace-v2.1.1.zip + +2.1.0 Jul 18, 2017. +The schema number for this release is 92. +https://github.com/archivesspace/archivesspace/releases/download/v2.1.0/archivesspace-v2.1.0.zip + +2.0.1 May 2, 2017. +The schema number for this release is 84. +https://github.com/archivesspace/archivesspace/releases/download/v2.0.1/archivesspace-v2.0.1.zip + +2.0.0 Apr 18, 2017. +The schema number for this release is 84. +https://github.com/archivesspace/archivesspace/releases/download/v2.0.0/archivesspace-v2.0.0.zip + +1.5.4 Mar 16, 2017. +The schema number for this release is 75. +https://github.com/archivesspace/archivesspace/releases/download/v1.5.4/archivesspace-v1.5.4.zip + +1.5.3 Feb 15, 2017. +The schema number for this release is 75. +https://github.com/archivesspace/archivesspace/releases/download/v1.5.3/archivesspace-v1.5.3.zip + +1.5.2 Dec 8, 2016. +The schema number for this release is 75. 
+https://github.com/archivesspace/archivesspace/releases/download/v1.5.2/archivesspace-v1.5.2.zip + +1.5.1 Jul 29, 2016. +The schema number for this release is 74. +https://github.com/archivesspace/archivesspace/releases/download/v1.5.1/archivesspace-v1.5.1.zip + +1.5.0 Jul 20, 2016. +The schema number for this release is 74. +https://github.com/archivesspace/archivesspace/releases/download/v1.5.0/archivesspace-v1.5.0.zip + +1.4.2 Oct 27, 2015. +The schema number for this release is 59. +https://github.com/archivesspace/archivesspace/releases/download/v1.4.2/archivesspace-v1.4.2.zip + +1.4.1 Oct 13, 2015. +The schema number for this release is 59. +https://github.com/archivesspace/archivesspace/releases/download/v1.4.1/archivesspace-v1.4.1.zip + +1.4.0 Sep 29, 2015. +The schema number for this release is 59. +https://github.com/archivesspace/archivesspace/releases/download/v1.4.0/archivesspace-v1.4.0.zip + +1.3.0 Jun 30, 2015. +The schema number for this release is 56. +https://github.com/archivesspace/archivesspace/releases/download/v1.3.0/archivesspace-v1.3.0.zip + +1.2.0 Mar 30, 2015. +The schema number for this release is 38. +https://github.com/archivesspace/archivesspace/releases/download/v1.2.0/archivesspace-v1.2.0.zip + +1.1.2 Jan 21, 2015. +The schema number for this release is 35. +https://github.com/archivesspace/archivesspace/releases/download/v1.1.2/archivesspace-v1.1.2.zip + +1.1.1 Jan 6, 2015. +The schema number for this release is 35. +https://github.com/archivesspace/archivesspace/archive/refs/tags/v1.1.1.zip (only source available) + +1.1.0 Oct 20, 2014. +The schema number for this release is 33. +https://github.com/archivesspace/archivesspace/releases/download/v1.1.0/archivesspace-v1.1.0.zip + +1.0.9 May 13, 2014. +The schema number for this release is ??? +https://github.com/archivesspace/archivesspace/releases/download/v1.0.9/archivesspace-v1.0.9.zip + +1.0.7.1 March 7, 2014. +The schema number for this release is ??? 
+https://github.com/archivesspace/archivesspace/releases/download/v1.0.7.1/archivesspace-v1.0.7.1.zip + +1.0.4 Jan 14, 2014. +The schema number for this release is ??? +https://github.com/archivesspace/archivesspace/releases/download/v1.0.4/archivesspace-v1.0.4.zip + +1.0.2 Nov 26, 2013. +The schema number for this release is ??? +https://github.com/archivesspace/archivesspace/releases/download/v1.0.2/archivesspace-v1.0.2.zip + +1.0.1 Nov 1, 2013. +The schema number for this release is ??? +https://github.com/archivesspace/archivesspace/releases/download/v1.0.1/archivesspace-v1.0.1.zip + +1.0.0 Oct 4, 2013. +The schema number for this release is ??? +https://github.com/archivesspace/archivesspace/releases/download/v1.0.0/archivesspace-v1.0.0.zip diff --git a/src/content/docs/fr/development/ui_test.md b/src/content/docs/fr/development/ui_test.md new file mode 100644 index 0000000..c64d6a6 --- /dev/null +++ b/src/content/docs/fr/development/ui_test.md @@ -0,0 +1,140 @@ +--- +title: UI tests +description: Instructions on running automated browser tests with Selenium on the ArchivesSpace UI on both Firefox and Chrome. +--- + +ArchivesSpace's staff and public interfaces use [Selenium](http://docs.seleniumhq.org/) to run automated browser tests. These tests can be run using [Firefox via geckodriver](https://firefox-source-docs.mozilla.org/testing/geckodriver/geckodriver/index.html) and [Chrome](https://sites.google.com/a/chromium.org/chromedriver/home) (either regular Chrome or headless). + +## UI tests with firefox (default) + +Firefox is the default used in our [CI workflows](https://github.com/archivesspace/archivesspace/actions). + +On Ubuntu Linux 22.04 or later, the included Firefox deb package is a transition package that actually installs Firefox through [snap](https://snapcraft.io/). Snap has security restrictions that do not work with automated testing without additional configuration. 
+ +To uninstall the Firefox snap package and reinstall it as a traditional deb package on Ubuntu Linux use: + +```bash +# remove old snap firefox package (if installed) +sudo snap remove firefox + +# create a keyring directory (if not existing) +sudo install -d -m 0755 /etc/apt/keyrings + +# download mozilla key and add it to the keyring +wget -q https://packages.mozilla.org/apt/repo-signing-key.gpg -O- | sudo tee /etc/apt/keyrings/packages.mozilla.org.asc > /dev/null + +# set high priority for the mozilla pakcages +echo "deb [signed-by=/etc/apt/keyrings/packages.mozilla.org.asc] https://packages.mozilla.org/apt mozilla main" | sudo tee -a /etc/apt/sources.list.d/mozilla.list > /dev/null +echo ' +Package: * +Pin: origin packages.mozilla.org +Pin-Priority: 1000 +' | sudo tee /etc/apt/preferences.d/mozilla + +# install firefox +sudo apt update && sudo apt install firefox +``` + +When using firefox, you need to make sure that the version of geckodriver you are using works with your firefox version, see this [compatibility table](https://firefox-source-docs.mozilla.org/testing/geckodriver/Support.html). Get your installed firefox version by running: `firefox --version`. + +On Linux, you can download the geckodriver version that corresponds to your firefox version [here](https://github.com/mozilla/geckodriver/releases). + +On Mac you can use: `brew install geckodriver`. + +## UI tests with Chrome + +To run using Chrome, you must first download the appropriate [ChromeDriver +executable](https://sites.google.com/a/chromium.org/chromedriver/downloads) +and place it somewhere in your OS system path. Mac users with Homebrew may accomplish this via `brew cask install chromedriver`. + +**Please note, you must have either Firefox or Chrome installed on your system to +run these tests. 
Consult the [Firefox WebDriver](https://developer.mozilla.org/en-US/docs/Web/WebDriver) or [ChromeDriver](https://sites.google.com/a/chromium.org/chromedriver/home) documentation to ensure your Selenium, driver, browser, and OS versions all match and support each other.**
+
+## Before running
+
+Run the bootstrap build task to configure JRuby and all required dependencies:
+
+```bash
+$ cd ..
+$ build/run bootstrap
+```
+
+Note: all example code assumes you are running from your ArchivesSpace project directory.
+
+## Running the tests
+
+```bash
+# Frontend tests
+./build/run frontend:selenium # Firefox, headless
+FIREFOX_OPTS= ./build/run frontend:selenium # Firefox, no opts = headed
+
+SELENIUM_CHROME=true ./build/run frontend:selenium # Chrome, headless
+SELENIUM_CHROME=true CHROME_OPTS= ./build/run frontend:selenium # Chrome, no opts = headed
+
+# Public tests
+./build/run public:test # Firefox, headless
+FIREFOX_OPTS= ./build/run public:test # Firefox, no opts = headed
+
+SELENIUM_CHROME=true ./build/run public:test # Chrome, headless
+SELENIUM_CHROME=true CHROME_OPTS= ./build/run public:test # Chrome, no opts = headed
+```
+
+Tests can be scoped to specific files or groups:
+
+```bash
+./build/run .. -Dspec='path/to/spec/from/spec/directory' # single file
+./build/run .. -Dexample='[description from it block]' # specific block
+
+# EXAMPLES
+./build/run frontend:selenium -Dexample='Repository model'
+FIREFOX_OPTS= ./build/run frontend:selenium -Dexample='Repository model' # Firefox, headed
+
+./build/run public:test -Dspec='features/accessibility_spec.rb'
+SELENIUM_CHROME=true CHROME_OPTS= ./build/run public:test -Dspec='features/accessibility_spec.rb' # Chrome, headed
+```
+
+Tests require a backend and a frontend service to be running.
To avoid the overhead of starting and stopping them while developing, you can run tests against a dev backend:
+
+```bash
+# start mysql and solr containers:
+docker-compose -f docker-compose-dev.yml up
+
+# start services:
+supervisord -c supervisord/archivesspace.conf
+
+# run a spec using the started backend:
+ASPACE_TEST_BACKEND_URL='http://localhost:4567' ./build/run frontend:test -Dpattern="./features/events_spec.rb"
+
+# run all examples that contain "can spawn" in their description:
+./build/run frontend:test -Dpattern="./features/accessions_spec.rb" -Dexample="can spawn"
+```
+
+Note, however, that some tests depend on a sequence of ordered steps and may not always run cleanly in isolation. In this case, more than the example provided may be run, and/or unexpected failures may result.
+
+### Saved pages on spec failures
+
+When frontend specs fail, a screenshot and an HTML page are saved for each failed example under `frontend/tmp/capybara`. In CI, a zip file will be available for each failed CI job run under Summary -> Artifacts. In order to load the assets (and not see plain HTML) when viewing the saved HTML pages, a dev server should be running locally on port 3000; see [Running a development version of ArchivesSpace](/development/dev).
+
+### Keeping the test database up to date
+
+When calling `./build/run frontend:test` to run frontend specs, the following steps happen before the actual specs run:
+
+- All tables of the test database are dropped: `./build/run db:nuke:test`
+- `frontend/spec/fixtures/archivesspace-test.sql` is loaded into the test database: `./build/run db:load:test`
+- Any not-yet-applied migrations are run: `./build/run db:migrate:test`
+
+#### Updating the test database dump
+
+If any migrations are being applied whenever you run one or all frontend specs, it means that the test database dump `frontend/spec/fixtures/archivesspace-test.sql` is out of date.
A new test database dump can be created by running:
+
+```bash
+./build/run db:nuke:test
+./build/run db:load:test
+./build/run db:migrate:test
+./build/run db:dump:test
+```
+
+An updated `frontend/spec/fixtures/archivesspace-test.sql` will be created that can be committed and pushed to a Pull Request.
diff --git a/src/content/docs/fr/development/vscode.md b/src/content/docs/fr/development/vscode.md
new file mode 100644
index 0000000..729f336
--- /dev/null
+++ b/src/content/docs/fr/development/vscode.md
@@ -0,0 +1,70 @@
+---
+title: Using the VS Code editor
+description: Instructions for using the VS Code editor with ArchivesSpace, including prerequisites and setup.
+---
+
+ArchivesSpace provides a [VS Code settings file](https://github.com/archivesspace/archivesspace/blob/master/.vscode/settings.json) that makes it easy for contributors using VS Code to follow the code style of the project and work with the end-to-end tests. Using this toolchain in your editor helps fix code format and lint errors _before_ committing files or running tests. In many cases such errors will be fixed automatically when the file being worked on is saved. Errors that can't be fixed automatically will be highlighted with squiggly lines; hovering your cursor over these lines will display a description of the error to help you reach a solution.
+
+## Prerequisites
+
+1. [Node.js](https://nodejs.org)
+2. [Ruby](https://www.ruby-lang.org/)
+3. [VS Code](https://code.visualstudio.com/)
+
+## Set up VS Code
+
+### Add system dependencies
+
+1. [ESLint](https://eslint.org/)
+2. [Prettier](https://prettier.io/)
+3. [RuboCop](https://rubocop.org/)
+4. [Stylelint](https://stylelint.io/)
+
+#### RuboCop
+
+```bash
+gem install rubocop
+```
+
+See https://docs.rubocop.org/rubocop/installation.html for further information, including using Bundler.
+
+#### ESLint, Prettier, Stylelint
+
+Run the following command from the ArchivesSpace root directory.
+
+```bash
+npm install
+```
+
+See [package.json](https://github.com/archivesspace/archivesspace/blob/master/package.json) for further details on how these tools are used in ArchivesSpace.
+
+### Add VS Code extensions
+
+Add the following extensions via the VS Code command palette or the Extensions panel. (See this [documentation for installing and managing extensions](https://code.visualstudio.com/learn/get-started/extensions).)
+
+1. [ESLint](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) (dbaeumer.vscode-eslint)
+2. [Prettier](https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode) (esbenp.prettier-vscode)
+3. [Ruby Rubocop Revived](https://marketplace.visualstudio.com/items?itemName=LoranKloeze.ruby-rubocop-revived) (LoranKloeze.ruby-rubocop-revived)
+4. [Stylelint](https://marketplace.visualstudio.com/items?itemName=stylelint.vscode-stylelint) (stylelint.vscode-stylelint)
+
+Optional — for enhancing work with the end-to-end tests:
+
+5. [Cucumber](https://marketplace.visualstudio.com/items?itemName=CucumberOpen.cucumber-official) (CucumberOpen.cucumber-official) — see [End-to-end test integration](#end-to-end-test-integration), especially step-definition navigation.
+
+Because these extensions work in tandem with the [VS Code settings file](https://github.com/archivesspace/archivesspace/blob/master/.vscode/settings.json), they only affect your ArchivesSpace VS Code workspace, not your global VS Code user settings.
+
+The extensions should now work out of the box, reporting errors and autocorrecting fixable errors on file save.
+
+### End-to-end test integration
+
+The ArchivesSpace repository includes optional VS Code workspace configuration that integrates the Cucumber end-to-end test suite with the editor.
The files [`.vscode/example.tasks.json`](https://github.com/archivesspace/archivesspace/blob/master/.vscode/example.tasks.json) and [`.vscode/example.settings.json`](https://github.com/archivesspace/archivesspace/blob/master/.vscode/example.settings.json) are not enabled by default, so they do not override your personal editor configuration.
+
+**Enable the tasks**
+
+Copy the example tasks file to `.vscode/tasks.json`. This adds a task that runs the e2e test suite with the correct working directory, Ruby environment, and environment variables. Run it via **Terminal → Run Task… → Cucumber: Run e2e-test** (the same command as in the [e2e test documentation](/development/e2e_tests)). You may optionally supply a feature file path in the form `file.feature:line`.
+
+**Step-definition navigation**
+
+Integrate the contents of `example.settings.json` into your existing `.vscode/settings.json` (merge the Cucumber-related settings into the existing file rather than replacing it, so your current workspace settings are preserved).
+
+This configures the Cucumber extension for `e2e-tests/**/*.feature` and shared Ruby step definitions, enabling jump-to-definition, undefined-step detection, and discovery of shared steps. This simplifies contributing new end-to-end tests.
diff --git a/src/content/docs/fr/index.mdx b/src/content/docs/fr/index.mdx
new file mode 100644
index 0000000..3d6ec85
--- /dev/null
+++ b/src/content/docs/fr/index.mdx
@@ -0,0 +1,14 @@
+---
+title: ArchivesSpace Technical Documentation
+description: Technical documentation for ArchivesSpace, the open source archives management tool.
+tableOfContents: false
+editUrl: false
+issueUrl: false
+lastUpdated: false
+prev: false
+next: false
+---
+
+import Homepage from '@components/HomePage.astro'
+
+<Homepage />
diff --git a/src/content/docs/fr/migrations/migrate_from_archivists_toolkit.md b/src/content/docs/fr/migrations/migrate_from_archivists_toolkit.md
new file mode 100644
index 0000000..c45195b
--- /dev/null
+++ b/src/content/docs/fr/migrations/migrate_from_archivists_toolkit.md
@@ -0,0 +1,126 @@
+---
+title: Migrating from Archivists' Toolkit
+description: Guidelines for migrating data from Archivists' Toolkit 2.0 Update 16 to ArchivesSpace 2.1.x or 2.2.x releases using the migration tool provided by ArchivesSpace.
+---
+
+These guidelines are for migrating data from Archivists' Toolkit 2.0 Update 16 to all ArchivesSpace 2.1.x or 2.2.x releases using the migration tool provided by ArchivesSpace. Migrations of data from earlier versions of the Archivists' Toolkit (AT) or other versions of ArchivesSpace are not supported by these guidelines or migration tool.
+
+> Note: A migration from Archivists' Toolkit to ArchivesSpace should not be run against an active production database.
+
+## Preparing for migration
+
+- Make a copy of the AT instance, including the database, to be migrated and use it as the source of the migration. It is strongly recommended that you not use your AT production instance and database as the source of the migration, for the simple reason of protecting the production version from any anomalies that might occur during the migration process.
+- Review your source database for the quality of the data. Look for invalid records, duplicate name and subject records, and duplicate controlled values. Irregular data will either be carried forward to the ArchivesSpace instance or, in some cases, block the migration process.
+- Select a representative sample of accession, resource, and digital object records to be examined closely when the migration is completed.
Make sure to represent in the sample both the simplest and most complicated or extensive records in the overall data collection.
+
+### Notes
+
+- An AT subject record will be set to type 'topical' if it does not have a valid AT type statement or its type is not one of the types in ArchivesSpace. Several other AT LookupList values are not present in ArchivesSpace. These LookupList values cannot be added during the AT migration process and will therefore need to be changed in AT prior to migration. For full details on enum (controlled value list) mappings see the data map. You can use the AT Lookup List tool to change values that will not map correctly, as specified by the data map.
+- Record audit information (created by, date created, modified by, and date modified) will not migrate from AT to ArchivesSpace. ArchivesSpace will assign new audit data to each record as it is imported into ArchivesSpace. The exception to this is that the username of the user who creates an accession record will be migrated to the accession general note field.
+- Set up an ArchivesSpace production instance, including a MySQL database to migrate into. Instructions are included at [Getting Started with ArchivesSpace](/administration/getting_started) and [Running ArchivesSpace against MySQL](/provisioning/mysql).
+
+## Preparing for Migrating AT Data
+
+- The migration process is iterative in nature. A migration report is generated at the end of each migration routine. The report indicates errors or issues occurring with the migration. (An example of an AT migration report is provided at the end of this document.) You should use this report to determine if any problems observed in the migration results are best remedied in the source data or in the migrated data in the ArchivesSpace instance. If you address the problems in the source data, then you can simply conduct the migration again.
+
+- However, once you accept the migration and address problems in the migrated data, you cannot migrate the source data again without establishing a new target ArchivesSpace instance. Migrating data to a previously migrated ArchivesSpace database may result in a great many duplicate record error messages and may cause unrecoverable damage to the ArchivesSpace database.
+- Please note, data migration can be a very memory and time intensive task due to the large number of records being transferred. As such, we recommend running the AT migration on a computer with at least 2GB of available memory.
+- Make sure your ArchivesSpace MySQL database is set up correctly, following the documentation in the ArchivesSpace README file. When creating a MySQL database, you MUST set the default character encoding for the database to be UTF8. This is particularly important if you use a MySQL client, such as Navicat, MySQL Workbench, phpMyAdmin, etc., to create the database. See [Running ArchivesSpace against MySQL](/provisioning/mysql) for more details.
+- Increase the maximum Java heap space if you are experiencing timeout events. To do so:
+  - Stop the current ArchivesSpace instance.
+  - Open the file "archivesspace.sh" (Linux / Mac OS X) or "archivesspace.bat" (Windows) in a text editor. The file is located in the ArchivesSpace installation directory.
+  - Find the text string "-Xmx512m" and change it to "-Xmx1024m".
+  - Save the file.
+  - Restart the ArchivesSpace instance.
+  - Restart the AT migration process.
+
+## Running the Migration Tool as an AT Plugin
+
+- Make sure that the AT instance you want to migrate from is shut down. Next, download the "scriptAT.zip" file from the at-migration release github page (https://github.com/archivesspace/at-migration/releases) and copy the file into the plugins folder of the AT instance, overwriting the one that's already there if needed.
+- Make sure the ArchivesSpace instance that you are migrating into is up and running.
+
+- Restart the AT instance to load the newly installed plug-in. To run the plug-in go to the "Tools" menu, then select "Script Runtime v1.0", and finally "ArchivesSpace Data Migrator". This will cause the plug-in window to display.
+
+![AT migrator](../../../../images/at_migrator.jpg)
+
+- Change the default information in the Migrator UI:
+  - **Threads** – Used to specify the number of clients that are used to copy Resource records simultaneously. The limit on the number of clients depends on the record size and allocated memory. A number from 4 to 6 is generally a good value to use, but can be reduced if an "Out of Memory Exception" occurs.
+  - **Host** – The URL and port number of the ArchivesSpace backend server.
+  - **"Copy records when done" checkbox** – Used to specify that the records should be copied once the repository check has completed.
+  - **Password** – Password for the ArchivesSpace "admin" account. The default value of "admin" should work unless it was changed by the ArchivesSpace administrator.
+  - **Reset Password** – Each user account transferred has its password reset to this value. Please note that users need to change their password when they first log in unless LDAP is used for authentication.
+  - **"Specify Type of Extent Data" radio button** – If you are using the BYU Plugin, select that option. Otherwise, leave as the default – Normal or Harvard Plugin.
+  - **"Specify Unlinked Records to NOT Copy" checkboxes** – If you have name or subject records that are not linked to accessions, resources, or digital objects, you can choose not to migrate those to ArchivesSpace.
+  - **"Records to Publish?" checkboxes** – Used to specify what types of records should be published after they are migrated to ArchivesSpace.
+  - **Text box showing -refid_unique, -term_default** – This is needed for the functioning of the migration tool. Please do not make changes to this area.
+  - **Output Console** – Display section for following the migration while it is running.
+  - **View Error Log** – Used to view a printout of all the errors encountered during the migration process. This can be used while the migration process is underway as well.
+- Once you have made the appropriate changes to the UI, there are three buttons to choose from to start the migration process.
+  - **Copy to ArchivesSpace** – This starts the migration to the ArchivesSpace instance indicated by the Host URL.
+  - **Run Repository Check** – The repository check searches for, and attempts to fix, repository misalignment between Resources and linked Accession/Digital Object records. The fix applied entails copying the linked accession/digital object record to the repository of the resource record in the ArchivesSpace database (those record positions are not modified in the AT database).
+
+    As long as accession records are not linked to multiple Resource records in different repositories, the fix will be valid. Otherwise, you will receive a warning message. For such cases, the Resource and Accession record(s) will still be migrated, but without links to one another; those links will need to be re-established in ArchivesSpace.
+
+    This misalignment problem involves only accession and resource records and not digital object records, as accession and resource records have a many-to-many relationship. Assessments also can have a many-to-many relationship with resources, accessions, and digital objects. However, since assessments are small and quick to copy, they will simply be copied to as many repositories as needed to establish all the appropriate links.
+
+    If the "Copy Records When Done" checkbox is selected, the records will be migrated to the ArchivesSpace instance once the check is completed.
+
+  - **Continue Previous Migration** – If the migration process fails, this is used to skip to the place the failed previous migration left off. This should allow the migration process of resource records to be gracefully restarted without having to clean out the ArchivesSpace backend database and start from scratch.
+
+- For the most part, the data migration process should be automatic, with an error log being generated when completed. However, depending on the particular data, various errors may occur that would require the migration to be re-run after they have been resolved by the user. The time a migration takes to complete will depend on a number of factors (database size, network performance, etc.), but can be anywhere from a couple of hours to a few days.
+- Data from the following AT modules will migrate:
+  - Lookup Lists
+  - Repositories
+  - Locations
+  - Users
+  - Subjects
+  - Names
+  - Accessions
+  - Digital Objects and Digital Object Components
+  - Resources and Resource Components
+  - Assessments
+- Data from the following AT modules will not migrate:
+  - Reports
+  > INFORMATION MISSING FROM SOURCE DOCUMENT - NEEDS REVIEW!!!
+
+## Assessing the Migration and Cleaning Up Data
+
+Use the migration report to assess the fidelity of the migration and to determine whether to:
+
+- Fix data in the source AT instance and conduct the migration again, or
+- Fix data in the target ArchivesSpace instance.
+
+If you choose to fix the data in AT and conduct the migration again, you will need to delete all the content in the ArchivesSpace instance.
+
+If you accept the migration in the ArchivesSpace instance, the following outlines how to check and fix your data.
+
+- Re-establish user passwords. While user records will migrate, the passwords associated with them will not. You will need to re-assign those passwords according to the policies or conventions of your repositories.
+
+- Review closely the set of sample records you selected:
+  - Accessions
+  - Resources
+  - Digital objects
+- Review the following groups of records, making sure the correct number of records migrated:
+  - Accessions
+  - Assessments
+  - Resources
+  - Digital objects
+  - Controlled vocabulary lists
+  - Subjects
+  - Agents (Name records in AT)
+  - Locations
+  - Collection Management Classifications
+  - There may be a few extra agent records due to ArchivesSpace defaults, or extra assessments if they were linked to records from more than one repository.
+- In conducting the reviews, look for duplicate or incomplete records, broken links, or truncated data.
+- Take special care to make sure your container data and locations are correct. The model for this is significantly different between AT and ArchivesSpace (where locations are tied to a container rather than directly to a resource or accession), so this presents some challenges for migration.
+- Merge enumeration values as necessary. For instance, if you had both 'local' and 'local sources' as a source for names, it might be a good idea to merge these values.
diff --git a/src/content/docs/fr/migrations/migrate_from_archon.md b/src/content/docs/fr/migrations/migrate_from_archon.md
new file mode 100644
index 0000000..f0402fb
--- /dev/null
+++ b/src/content/docs/fr/migrations/migrate_from_archon.md
@@ -0,0 +1,180 @@
+---
+title: Migrating from Archon
+description: Guidelines for migrating data from Archon 3.21-rev3 to ArchivesSpace 2.2.2 using the migration tool provided by ArchivesSpace.
+---
+
+These guidelines are for migrating data from Archon 3.21-rev3 to ArchivesSpace 2.2.2 using the migration tool provided by ArchivesSpace. Migrations of data from earlier versions of Archon or other versions of ArchivesSpace are not supported by these guidelines or migration tool.
+
+> Note: A migration from Archon to ArchivesSpace should not be run against an active production database.
+
+## Preparing for migration
+
+Select a representative sample of accession, classification, collection, collection content, and digital object records to be examined closely when the migration is completed. Make sure to include both simple and more complicated or extensive records in the sample.
+
+Review your Archon database for data quality.
+
+### Accession Records
+
+- Supply an accession date for all records, when possible. If an accession date is not recorded in Archon, the date of 01/01/9999 will be supplied during the migration process. If you wish to change this default value, you may do so by editing the following file in the new Archon distribution, prior to running the migration:
+  `packages/core/templates/default/accession-list.inc.php`
+- Supply an identifier for all records, when possible. If an identifier is not recorded in Archon, a supplied identifier will be constructed during the migration process, consisting of the date and the truncated accession title.
+
+### Classification Records
+
+Ensure that there are no duplicate classification titles at the same level in the classification hierarchy. If the migration tool encounters a duplicate value, some of the save operations for classifications will fail, and you will need to redo the migration.
+
+### Collection Records
+
+If normalized dates are not recorded correctly (i.e. if the end date and begin date are reversed), they will not be migrated or may cause the migration to fail. To check for such entries, a system administrator can run the following query against the database:
+
+`SELECT ID, Title, NormalDateBegin, NormalDateEnd FROM tblCollections_Collections WHERE NormalDateBegin > NormalDateEnd;`
+
+### Level/Container Manager
+
+Review the settings to make sure that each 'level container' is appropriately marked with the correct values for "Intellectual Level" and "Physical Container" and that EAD values are correctly recorded.
+ +![Level Container Manager](../../../../images/archon_level.jpg) + +Failure to code level container values correctly may result in incorrect nesting of resource components in ArchivesSpace. While the following information does not need to be acted upon prior to migration, please note the following if you find that content is not nested correctly after you migrate: + +- Collection content records that have a level container that is 'Intellectual Only' will be migrated to ArchivesSpace as resource components. Each level/container that has 'intellectual level' checked should have a valid value recorded in the "EAD Level" field (i.e. class, collection, file, fonds, item, otherlevel, recordgrp, series, subfonds, subgrp, subseries). These values are case sensitive, and all other values will be migrated as "otherlevel" on the collection content/resource component records to which they apply. +- Collection content records that have a level container that is 'Physical Only' will be migrated to ArchivesSpace as instance records of the type 'text' attached to a container in ArchivesSpace. These instance/container records will be attached to the intellectual level or levels that are immediate children of the container record as it was previously expressed in Archon. If the instance/container has no children it will be attached to its parent intellectual level instead. For illustrative purposes, the following screenshots show a container record prior to and following migration. + ![Archon container example](../../../../images/archon_container.jpg) +- Collection content records that have both physical and intellectual levels will be migrated as both resource components and instances. In this case the instance will be attached to the resource component. +- Collection content records that are neither physical nor intellectual levels will be migrated as if they were 'Intellectual Only'. This is not recommended and should be fixed prior to migration. 
+
+### Collection Content Records
+
+- If a value has not been set in the "Title" or "Inclusive Dates" field of an "intellectual" level/container in Archon, the collection content record being migrated will be supplied a title, based on its "label" value and the "level/container" type set in Archon.
+  ![Collection Content Records](../../../../images/archon_collection.jpg)
+- Optionally, if a migration fails, check for collection content records that reference invalid 'level/containers'. These records are found in the database tables, but are not visible to staff or end users, and they must be eliminated prior to migration. If not eliminated, the migration will fail. In order to identify these records, you should follow these steps. **Be very careful. If you are uncertain what you are doing, back up the database first or speak with a systems administrator!**
+- In MySQL or SQL Server, open the table titled 'tblCollections_LevelContainers'. Note the 'ID' value recorded for each row (i.e. LevelContainer).
+- Run a query against tblCollections_Content to find records where the LevelContainerID column references an invalid value. For example, if tblCollections_LevelContainers holds 'ID' values 1-6 and 8-22:
+  `SELECT * FROM tblCollections_Content WHERE LevelContainerID > 22 OR (LevelContainerID > 6 AND LevelContainerID < 8);`
+  This will provide a list of all records with an invalid 'LevelContainerID' (i.e. where a record with the primary key referenced by the foreign key cannot be found). Review this list carefully to make sure you are comfortable deleting the records, or change the LevelContainerID to a valid integer if you wish to retain the records. If you choose to delete the records, you will need to do so directly in the database (see below). If you choose to do the latter, you may need to take additional steps directly in the database to link these records to a valid parent content record or collection; additional instructions can be supplied upon request.
+- Run a query to delete the invalid records from the collections content table. For example:
+  `DELETE FROM tblCollections_Content WHERE LevelContainerID > 22 OR (LevelContainerID > 6 AND LevelContainerID < 8);`
+- Optionally, if the migration fails, check for 'duplicate' collection content records. 'Duplicate' records are those that occupy the same node in the collection/content hierarchy. To check for these records, run the following query in MySQL or SQL Server:
+  `SELECT ParentID, SortOrder, COUNT(*) FROM tblCollections_Content GROUP BY ParentID, SortOrder HAVING COUNT(*) > 1;`
+- The query above checks for records that occupy the same branch and same position in the content hierarchy. If you discover such records, the sort order value of one of the records must be changed, so that both records occupy a unique position. In order to do this, run a query that finds all records attached to the parent record, then run an update query to change the sort order of one of the offending records so that each has a unique sort order. For example, if the query above returns ParentID 8619 as a 'duplicate' value, you would run query one with the appropriate ParentID value to identify the offending records, and query two to fix the problem:
+  **Query one:**
+
+  `SELECT ID, ParentID, SortOrder, Title FROM tblCollections_Content WHERE ParentID=8619;`
+
+  | ID   | ParentID | SortOrder | Title       |
+  | ---- | -------- | --------- | ----------- |
+  | 8620 | 8619     | 1         | to mother   |
+  | 8621 | 8619     | 1         | from mother |
+  | 8622 | 8619     | 3         | to father   |
+  | 6823 | 8619     | 4         | from father |
+
+  **Query two:**
+
+  `UPDATE tblCollections_Content SET SortOrder=2 WHERE ID=8621;`
+
+## Preparing for Migrating Archon Data
+
+The migration process is iterative in nature. You should plan to do several test migrations, culminating in a final migration. Typically, migration will require assistance from a system administrator.
+
+The migration tool will connect to your Archon installation, read data from defined 'endpoints', and place the information in a target ArchivesSpace instance.
+
+A migration report is generated at the end of each migration routine and can be downloaded from the application. The report indicates errors or issues occurring with the migration. Sample data from a migration report is provided in [Appendix A](#Appendix-A%3A-Migration-Log-Review).
+
+You should use this report to determine if any problems observed in the migration results are best remedied in the source data or in the migrated data in the ArchivesSpace instance. If you address the problems in the source data, then you can simply clear the database and conduct the migration again. However, once you accept the migration and make changes to the migrated data in ArchivesSpace, you cannot migrate the source data again without either overwriting the previous migration or establishing a new target ArchivesSpace instance.
+
+Please note, data migration can be a very memory and time intensive task due to the large number of records being transferred. As such, we recommend running the Archon migration tool on a server with at least 2GB of available memory. Test migrations have run from under an hour to twelve hours or more in the case of complex and large instances of Archon.
+
+Before starting the migration process, make sure that your current Archon installation is up to date: i.e. that you are using version 3.21 rev3. If you are on an earlier version of Archon, make a copy of the Archon instance, including the database, to be migrated and use it as the source of the migration. It is strongly recommended that you not use your Archon production instance and database as the source of the migration, for the simple reason of protecting the production version from any anomalies that might occur during the migration process.
Upgrade the copy of the Archon instance to version 3.21 rev3 prior to starting the migration process.
+
+### Get Archon to ArchivesSpace Migration Tool
+
+Download the latest JAR file release from https://github.com/archivesspace-deprecated/ArchonMigrator/releases/latest. This is an executable JAR file – double-click to run it.
+
+### Install ArchivesSpace Instance
+
+Implement an ArchivesSpace production version, including setting up a MySQL database to migrate into. Instructions are included at [Getting Started with ArchivesSpace](/administration/getting_started) and [Running ArchivesSpace against MySQL](/provisioning/mysql).
+
+### Prepare to Launch Migration
+
+> **Important Note:** The migration process should be launched from a networked computer with a stable (i.e. wired) connection, and you should turn power save settings off on the client computer you use to launch the migration. So that the migration can proceed in an undisturbed fashion, you should not try to access the ArchivesSpace or Archon front end or public interface until after the migration has completed. **If you fail to follow these instructions, the migration tool may not provide useful feedback and it will be difficult to determine how successful the migration was.**
+
+For the most part, the data migration process should be automatic, with errors being reported as the tool migrates and a log being made available when migration is complete. Depending on the particular data being migrated, various errors may occur. These may require the migration to be re-run after they have been resolved by the user. When this occurs, the MySQL database should be emptied by the system administrator, and the migration rerun after steps are taken to resolve the problem that caused the error.
+
+The time that the migration takes to complete will depend on a number of factors (database size, network performance, etc.), but has been known to take anywhere from a half hour to ten or twelve hours.
Most of this time will probably be spent migrating collection records.
+
+The following Archon datatypes will migrate, and all relationships that exist between these datatypes should be preserved in ArchivesSpace, except as noted in bold below. For each datatype, post-migration cleanup recommendations are provided in parentheses:
+
+- Editable controlled value lists:
+  - Subject sources (review post migration and merge values with ArchivesSpace defaults or functionally duplicate values, when possible)
+  - Creator sources (review post migration and merge values with ArchivesSpace defaults or functionally duplicate values, when possible)
+  - Extent units/types (merge functionally duplicate values)
+  - Material Types
+  - Container Types
+  - File Types
+  - Processing Priorities
+- Repositories
+- User/logins (users will need to reset their passwords)
+- Subjects (subjects of type personal, corporate, or family name are migrated as Agent records, and are linked to resources and digital objects in the subject role. Review these records and merge with duplicate agent names from creator migration, when possible.)
+- Creators/Names
+- Accessions (The migration tool will supply accession identifiers when these are blank in Archon. Review and change values, if appropriate.)
+- Digital Objects: The migration tool will generate digital object metadata records in ArchivesSpace for each digital library record that is stored in your Archon instance. For each file that has an attached digital library record, the migration tool will generate a digital object component and file instance record. In addition, the migration tool will provide a folder containing the source file you uploaded to Archon when you created the record. In order to link these files to the digital file records in ArchivesSpace, you should place the files in a single directory on a webserver.
+  **To preserve the linkage between the files and their metadata records in ArchivesSpace, you must provide the base URL of the folder where the objects will be placed.** The migration tool prepends this URL to the filename to form a complete path to the object location, for each file being exported, as shown in the screenshot below. (In version 2.2.2 of ArchivesSpace, with the default digital object templates, these files will be available in the public interface by clicking a link.)
+- Locations (Controlled location records are much more granular in ArchivesSpace than in Archon. You should have a location record for each unique combination of location drop down, range, section, and shelf in Archon, and these records should be linked to top container records which are in turn linked to an instance for each collection where they apply.)
+- Resources and Resource Components (see locations, above).
+
+Data from the following Archon modules will not migrate to ArchivesSpace:
+
+- Books (Book data could be migrated later if a plugin is developed to support this data).
+- AVSAP/Assessments
+
+## Launch Migration Process
+
+Make sure the ArchivesSpace instance that you are migrating into is up and running, then open up the migration tool.
+
+![Archon migrator](../../../../images/archon_migrator.jpg)
+
+1. Change the default information in the migration tool user interface:
+   - Archon Source – Supply the base URL for the Archon instance.
+   - Archon User – Username for an account with full administrator privileges.
+   - Password – Password for that same account.
+   - Download Digital Object Files checkbox – Check if you want to move any attached digital object files, and supply a web path to a web-accessible folder where you intend to place the digital objects after the migration is complete.
+   - Set Download Folder – Clicking this will open a file explorer that will allow you to specify the folder to which you want digital files from Archon to be downloaded.
+   - Set Default Repository checkbox – Select the "Set Default Repository" checkbox to set which repository Accession records and unlinked Digital Objects are copied to. The default is "Based on Linked Collection," which will copy Accession records to the same repository as any Collection records they are linked to, or the first repository if they are not. You can also select a specific repository from the drop-down list.
+   - Host – The URL and port number of the ArchivesSpace backend server.
+   - ASpace admin – User name for the ArchivesSpace "admin" account. The default value of "admin" should work unless it was changed by the ArchivesSpace administrator.
+   - Password – Password for the ArchivesSpace "admin" account. The default value of "admin" should work unless it was changed by the ArchivesSpace administrator.
+   - Reset Password – Each user account transferred has its password reset to this. Please note that users need to change their password when they first log in unless LDAP is used for authentication.
+   - Migration Options – This is needed for the functioning of the migration tool. Please do not make changes to this area.
+   - Output Console – Display section for following the migration while it is running.
+   - View Error Log – Used to view a printout of all the errors encountered during the migration process. This can be used while the migration process is underway as well.
+2. Press the "Copy to ArchivesSpace" button to start the migration process. This starts the migration to the ArchivesSpace instance indicated by the Host URL.
+3. If the migration process fails: Review the error message provided and/or the migration log. Fix any issues that have been identified, clear the target MySQL database, and try again.
+4. When the process has completed:
+   - Download the migration report.
+   - Move digital objects into the folder location corresponding to the URL you provided to the migration tool.
+
+## Assessing the Migration and Cleaning Up Data
+
+1. Use the migration report to assess the fidelity of the migration and to determine whether to fix data in the source Archon instance and conduct the migration again, or fix data in the target ArchivesSpace instance. If you choose to fix data in Archon, you will need to clear the ArchivesSpace database and then rerun the migration.
+2. Review the following record types, making sure the correct number of records migrated. In conducting the reviews, look for duplicate or incomplete records, broken links, or truncated data.
+   - Controlled vocabulary lists
+   - Classifications
+   - Accessions
+   - Resources
+   - Digital objects
+   - Subjects (not persons, families, and corporate bodies)
+   - Creators (known as Agents in ArchivesSpace)
+   - Locations
+3. Review closely the set of sample records you selected, comparing data in Archon to data in ArchivesSpace.
+4. If you accept the migration in the ArchivesSpace instance, then proceed to re-establish user passwords. While user records will migrate, the passwords associated with them will not. You will need to reassign those passwords according to the policies or conventions of your repositories.
+
+## Appendix A: Migration Log Review
+
+The migration log provides a description of any irregularities that take place during a migration and should be saved in a secure location for future reference. The log contains both save errors and warnings. The warnings should be reviewed after the migration for potential action.
+
+Most warnings will not require a follow-up action. For example, they may note that a supplied value has been provided to meet an ArchivesSpace data model requirement. This occurs for all collections with empty identifiers. Occasionally, warnings will indicate that there was a problem establishing a link between two records for a reason such as a resource component not being found.
Warnings like this should be cause for review since they may indicate that some data was lost.
+
+Save errors will note that a particular piece of data could not be migrated because it is not supported in the ArchivesSpace data model or for some other reason. In these cases, you should review the record in Archon and, if it was migrated at all, in ArchivesSpace. Oftentimes, these occur due to duplicate records (such as if you have a matching creator and person subject). If a save error occurs due to a duplicate record, this is usually okay but should still be reviewed to make sure there was no data loss. If a save error occurs for any other reason, this typically means the migration will need to be rerun (unless the record it occurred on is not needed or is easier to migrate manually).
+
+Typically, the migration log will record the Archon internal IDs of the original Archon object being migrated whenever a save error or warning occurs. This simplifies finding and correcting relevant records.
diff --git a/src/content/docs/fr/migrations/migration_tools.md b/src/content/docs/fr/migrations/migration_tools.md
new file mode 100644
index 0000000..523f0e4
--- /dev/null
+++ b/src/content/docs/fr/migrations/migration_tools.md
@@ -0,0 +1,59 @@
+---
+title: Migration tools
+description: Links to tools for migrating data into and out of ArchivesSpace.
+---
+
+## Archivists' Toolkit
+
+- [AT migration tool instructions](/migrations/migrate_from_archivists_toolkit)
+- [AT migration plugin](https://github.com/archivesspace/at-migration/releases)
+- [AT migration source code](https://github.com/archivesspace/at-migration)
+- [AT migration mapping (for 2.x versions of the tool and ArchivesSpace)](https://github.com/archivesspace/at-migration/blob/master/docs/ATMappingDocument.xlsx)
+
+### Older information
+
+- [AT migration guidelines (for migrations using the original migration tool through version 1.4.2; only supports migrations to version 1.4.2 or lower of ArchivesSpace)](http://archivesspace.org/wp-content/uploads/2016/08/ATMigrationGuidelines-REV-20140417.pdf)
+- [AT migration mapping (for migrations through version 1.4.2 or lower of the tool and ArchivesSpace)](http://archivesspace.org/wp-content/uploads/2016/08/ATMappingDocument_AT-ASPACE_BETA.xls)
+
+## Archon
+
+- [Archon migration tool instructions](/migrations/migrate_from_archon)
+- [Archon migration tool](https://github.com/archivesspace/archon-migration/releases/latest)
+- [Archon migration source code](https://github.com/archivesspace/archon-migration/)
+- [Archon migration mapping (for 2.x versions of the tool and ArchivesSpace)](https://docs.google.com/spreadsheets/d/13soN5djk16QYmRoSajtyAc_nBrNldyL58ViahKFJAog/edit?usp=sharing)
+
+### Older information
+
+- [refactored Archon migration plugin](https://github.com/archivesspace-deprecated/ArchonMigrator/releases)
+- [information about refactoring project](https://archivesspace.atlassian.net/browse/AR-1278)
+- [previous Archon migration plugin](https://github.com/archivesspace/archon-migration/releases)
+- [Plugin read me text](https://github.com/archivesspace-deprecated/ArchonMigrator/blob/master/README.md)
+- [Archon migration guidelines](http://archivesspace.org/wp-content/uploads/2016/05/Archon_Migration_Guidelines-7_13_2017.docx)
+- [Archon migration 
mapping](http://archivesspace.org/wp-content/uploads/2016/08/ArchonSchemaMappingsPublic.xlsx)
+
+## Data Import and Export Maps
+
+- [Accession CSV Map](http://archivesspace.org/wp-content/uploads/2016/05/Accession-CSV-mapping-2013-08-05.xlsx)
+- [Accession CSV Template](https://github.com/archivesspace/archivesspace/tree/master/templates)
+- [Archival Objects from Excel or CSV with Load Via Spreadsheet](https://github.com/archivesspace/archivesspace/tree/master/templates)
+- [Assessment CSV Template](https://github.com/archivesspace/archivesspace/tree/master/templates)
+- [Digital Object CSV Map](http://archivesspace.org/wp-content/uploads/2016/08/DigitalObject-CSV-mapping-2013-02-26.xlsx)
+- [Digital Object CSV Template](https://github.com/archivesspace/archivesspace/tree/master/templates)
+- [Digital Objects Export Maps](http://archivesspace.org/wp-content/uploads/2016/08/ASpace-Dig-Object-Exports.xlsx)
+- [EAD Import / Export Map](https://archivesspace.org/wp-content/uploads/2021/06/EAD-Import-Export-Mapping-20171030.xlsx)
+- [Location Record CSV Template](https://github.com/archivesspace/archivesspace/tree/master/templates)
+- (newly reviewed) [MARCXML Import Map](https://archivesspace.org/wp-content/uploads/2021/06/AS-MARC-import-mappings-2021-06-15.xlsx)
+- [MARCXML Export Map](https://archivesspace.org/wp-content/uploads/2021/06/MARCXML-Export-Mapping-20130715.xlsx)
+- [MARCXML Authority Import / Export Map](https://archivesspace.org/wp-content/uploads/2021/05/Agents-ASpace-to-MARCXMLMay2021.xlsx)
+- [EAC-CPF Import / Export Map](https://archivesspace.org/wp-content/uploads/2021/05/Agents-ASpace-to-EAC-CPFMay2021.xlsx)
+
+### OAI-PMH-only maps
+
+Most ArchivesSpace OAI-PMH responses are based on the export maps above, but there are a few that are only available through OAI-PMH:
+
+[MODS for resources and resource 
components](https://archivesspace.org/wp-content/uploads/2019/06/MODS-OAI-Export-Mapping-20190610.xlsx) +[Dublin Core for resources and resource components](https://archivesspace.org/wp-content/uploads/2019/06/DC-OAI-Export-Mapping-20190610.xlsx) +[DCMI Metadata Terms for resources and resource components](https://archivesspace.org/wp-content/uploads/2019/06/DCTerms-OAI-Export-Mapping-20190611.xlsx) diff --git a/src/content/docs/fr/provisioning/clustering.md b/src/content/docs/fr/provisioning/clustering.md new file mode 100644 index 0000000..db73b24 --- /dev/null +++ b/src/content/docs/fr/provisioning/clustering.md @@ -0,0 +1,370 @@ +--- +title: Load balancing and multiple tenants +description: Guidelines for running ArchivesSpace in a clustered environment for load-balancing purposes, and for supporting multiple tenants. +--- + +This document describes two aspects of running ArchivesSpace in a +clustered environment: for load-balancing purposes, and for supporting +multiple tenants (isolated installations of the system in a common +deployment environment). + +The configuration described in this document is one possible approach, +but it is not intended to be prescriptive: the application layer of +ArchivesSpace is stateless, so any mechanism you prefer for load +balancing across web applications should work just as well as the one +described here. + +Unless otherwise stated, it is assumed that you have root access on +your machines, and all commands are to be run as root (or with sudo). 
+
+## Architecture overview
+
+This document assumes an architecture with the following components:
+
+- A load balancer machine running the Nginx web server
+- Two application servers, each running a full ArchivesSpace
+  application stack
+- A MySQL server
+- A shared NFS volume mounted under `/aspace` on each machine
+
+## Overview of files
+
+The `files` directory in this repository (in the same directory as this
+`README.md`) contains what will become the contents of the `/aspace`
+directory, shared by all servers. It has the following layout:
+
+    /aspace
+    ├── archivesspace
+    │   ├── config
+    │   │   ├── config.rb
+    │   │   └── tenant.rb
+    │   ├── software
+    │   └── tenants
+    │       └── _template
+    │           └── archivesspace
+    │               ├── config
+    │               │   ├── config.rb
+    │               │   └── instance_hostname.rb.example
+    │               └── init_tenant.sh
+    └── nginx
+        └── conf
+            ├── common
+            │   └── server.conf
+            └── tenants
+                └── _template.conf.example
+
+The highlights:
+
+- `/aspace/archivesspace/config/config.rb` -- A global configuration file for all ArchivesSpace instances. Any configuration options added to this file will be applied to all tenants on all machines.
+- `/aspace/archivesspace/software/` -- This directory will hold the master copies of the `archivesspace.zip` distribution. Each tenant will reference one of the versions of the ArchivesSpace software in this directory.
+- `/aspace/archivesspace/tenants/` -- Each tenant will have a sub-directory under here, based on the `_template` directory provided. This holds the configuration files for each tenant.
+- `/aspace/archivesspace/tenants/[tenant name]/config/config.rb` -- The global configuration file for [tenant name]. This contains tenant-specific options that should apply to all of the tenant's ArchivesSpace instances (such as their database connection settings).
+- `/aspace/archivesspace/tenants/[tenant name]/config/instance_[hostname].rb` -- The configuration file for a tenant's ArchivesSpace instance running on a particular machine. This allows configuration options to be set on a per-machine basis (for example, setting different ports for different application servers) +- `/aspace/nginx/conf/common/server.conf` -- Global Nginx configuration settings (applying to all tenants) +- `/aspace/nginx/conf/tenants/[tenant name].conf` -- A tenant-specific Nginx configuration file. Used to set the URLs of each tenant's ArchivesSpace instances. + +## Getting started + +We'll assume you already have the following ready to go: + +- Three newly installed machines, each running RedHat (or CentOS) + Linux (we'll refer to these as `loadbalancer`, `apps1` and + `apps2`). +- A MySQL server. +- An NFS volume that has been mounted as `/aspace` on each machine. + All machines should have full read/write access to this area. +- An area under `/aspace.local` which will store instance-specific + files (such as log files and Solr indexes). Ideally this is just + a directory on local disk. +- Java 1.6 (or above) installed on each machine. + +### Populate your /aspace/ directory + +Start by copying the directory structure from `files/` into your +`/aspace` volume. This will contain all of the configuration files +shared between servers: + +```shell +mkdir /var/tmp/aspace/ +cd /var/tmp/aspace/ +unzip -x /path/to/archivesspace.zip +cp -av archivesspace/clustering/files/* /aspace/ +``` + +You can do this on any machine that has access to the shared +`/aspace/` volume. + +### Install the cluster init script + +On your application servers (`apps1` and `apps2`) you will need to +install the supplied init script: + +```shell +cp -a /aspace/aspace-cluster.init /etc/init.d/aspace-cluster +chkconfig --add aspace-cluster +``` + +This will start all configured instances when the system boots up, and +can also be used to start/stop individual instances. 
+ +### Install and configure Nginx + +You will need to install Nginx on your `loadbalancer` machine, which +you can do by following the directions at +http://nginx.org/en/download.html. Using the pre-built packages for +your platform is fine. At the time of writing, the process for CentOS +is simply: + +```shell +wget http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm +rpm -i nginx-release-centos-6-0.el6.ngx.noarch.rpm +yum install nginx +``` + +Nginx will place its configuration files under `/etc/nginx/`. For +now, the only change we need to make is to configure Nginx to load our +tenants' configuration files. To do this, edit +`/etc/nginx/conf.d/default.conf` and add the line: + +``` +include /aspace/nginx/conf/tenants/\*.conf; +``` + +_Note:_ the location of Nginx's main config file might vary between +systems. Another likely candidate is `/etc/nginx/nginx.conf`. + +### Download the ArchivesSpace distribution + +Rather than having every tenant maintain their own copy of the +ArchivesSpace software, we put a shared copy under +`/aspace/archivesspace/software/` and have each tenant instance refer +to that copy. To set this up, run the following commands on any one +of the servers: + +```shell +cd /aspace/archivesspace/software/ +unzip -x /path/to/downloaded/archivesspace-x.y.z.zip +mv archivesspace archivesspace-x.y.z +ln -s archivesspace-x.y.z stable +``` + +Note that we unpack the distribution into a directory containing its +version number, and then assign that version the symbolic name +"stable". This gives us a convenient way of referring to particular +versions of the software, and we'll use this later on when setting up +our tenant. + +We'll be using MySQL, which means we must make the MySQL connector +library available. 
To do this, place it in the `lib/` directory of +the ArchivesSpace package: + +```shell +cd /aspace/archivesspace/software/stable/lib +wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.24/mysql-connector-java-5.1.24.jar +``` + +## Defining a new tenant + +With our server setup out of the way, we're ready to define our first +tenant. As shown in _Overview of files_ above, each tenant has their +own directory under `/aspace/archivesspace/tenants/` that holds all of +their configuration files. In defining our new tenant, we will: + +- Create a Unix account for the tenant +- Create a database for the tenant +- Create a new set of ArchivesSpace configuration files for the + tenant +- Set up the database + +Our newly defined tenant won't initially have any ArchivesSpace +instances, but we'll set those up afterwards. + +To complete the remainder of this process, there are a few bits of +information you will need. In particular, you will need to know: + +- The identifier you will use for the tenant you will be creating. + In this example we use `exampletenant`. +- Which port numbers you will use for the application's backend, + Solr instance, staff and public interfaces. These must be free on + your application servers. +- If running each tenant under a separate Unix account, the UID and + GID you'll use for them (which must be free on each of your + servers). +- The public-facing URLs for the new tenant. We'll use + `staff.example.com` for the staff interface, and `public.example.com` + for the public interface. + +### Creating a Unix account + +Although not strictly required, for security and ease of system +monitoring it's a good idea to have each tenant instance running under +a dedicated Unix account. + +We will call our new tenant `exampletenant`, so let's create a user +and group for them now. 
You will need to run these commands on _both_
+application servers (`apps1` and `apps2`):
+
+```shell
+groupadd --gid 2000 exampletenant
+useradd --uid 2000 --gid 2000 exampletenant
+```
+
+Note that we specify a UID and GID explicitly to ensure they match
+across machines.
+
+### Creating the database
+
+ArchivesSpace assumes that each tenant will have their own MySQL
+database. You can create this from the MySQL shell:
+
+```sql
+create database exampletenant default character set utf8;
+grant all on exampletenant.* to 'example'@'%' identified by 'example123';
+```
+
+In this example, we have a MySQL database called `exampletenant`, and
+we grant full access to the user `example` with password `example123`.
+Assuming our database server is `db.example.com`, this corresponds to
+the database URL:
+
+```
+jdbc:mysql://db.example.com:3306/exampletenant?user=example&password=example123&useUnicode=true&characterEncoding=UTF-8
+```
+
+We'll make use of this URL in the following section.
+
+### Creating the tenant configuration
+
+Each tenant has their own set of files under the
+`/aspace/archivesspace/tenants/` directory. We'll define our new
+tenant (called `exampletenant`) by copying the template set of
+configurations and running the `init_tenant.sh` script to set them
+up. We can do this on either `apps1` or `apps2`--it only needs to be
+done once:
+
+```shell
+cd /aspace/archivesspace/tenants
+cp -a _template exampletenant
+```
+
+Note that we've named the tenant `exampletenant` to match the Unix
+account it will run as. Later on, the startup script will use this
+fact to run each instance as the correct user.
+
+For now, we'll just edit the configuration file for this tenant, under
+`exampletenant/archivesspace/config/config.rb`.
When you open this file you'll see two +placeholders that need filling in: one for your database URL, which in +our case is just: + +``` +jdbc:mysql://db.example.com:3306/exampletenant?user=example&password=example123&useUnicode=true&characterEncoding=UTF-8 +``` + +and the other for this tenant's search, staff and public user secrets, +which should be random, hard to guess passwords. + +## Adding the tenant instances + +To add our tenant instances, we just need to initialize them on each +of our servers. On `apps1` _and_ `apps2`, we run: + +```shell +cd /aspace/archivesspace/tenants/exampletenant/archivesspace +./init_tenant.sh stable +``` + +If you list the directory now, you will see that the `init_tenant.sh` +script has created a number of symlinks. Most of these refer back to +the `stable` version of the ArchivesSpace software we unpacked +previously, and some contain references to the `data` and `logs` +directories stored under `/aspace.local`. + +Each server has its own configuration file that tells the +ArchivesSpace application which ports to listen on. To set this up, +make two copies of the example configuration by running the following +command on `apps1` then `apps2`: + +```shell +cd /aspace/archivesspace/tenants/exampletenant/archivesspace +cp config/instance_hostname.rb.example config/instance_`hostname`.rb +``` + +Then edit each file to set the URLs that the instance will use. 
+Here's our `config/instance_apps1.example.com.rb`:
+
+```ruby
+{
+  :backend_url => "http://apps1.example.com:8089",
+  :frontend_url => "http://apps1.example.com:8080",
+  :solr_url => "http://apps1.example.com:8090",
+  :indexer_url => "http://apps1.example.com:8091",
+  :public_url => "http://apps1.example.com:8081",
+}
+```
+
+Note that the filename is important here: it must be:
+
+```
+instance_[server hostname].rb
+```
+
+These URLs will determine which ports the application listens on when
+it starts up, and are also used by the ArchivesSpace indexing system
+to track updates across the cluster.
+
+### Starting up
+
+As a one-off, we need to populate this tenant's database with the
+default set of tables. You can do this by running the
+`setup-database.sh` script on either `apps1` or `apps2`:
+
+```shell
+cd /aspace/archivesspace/tenants/exampletenant/archivesspace
+scripts/setup-database.sh
+```
+
+With the two instances configured, you can now use the init script to
+start them up on each server:
+
+```shell
+/etc/init.d/aspace-cluster start-tenant exampletenant
+```
+
+and you can monitor each instance's log file under
+`/aspace.local/tenants/exampletenant/logs/`. Once they're started,
+you should be able to connect to each instance with your web browser
+at the configured URLs.
+
+## Configuring the load balancer
+
+Our final step is configuring Nginx to accept requests for our staff
+and public interfaces and forward them to the appropriate application
+instance. Working on the `loadbalancer` machine, we create a new
+configuration file for our tenant:
+
+```shell
+cd /aspace/nginx/conf/tenants
+cp -a _template.conf.example exampletenant.conf
+```
+
+Now open `/aspace/nginx/conf/tenants/exampletenant.conf` in an
+editor. You will need to:
+
+- Replace `<tenantname>` with `exampletenant` where it appears.
+- Change the `server` URLs to match the hostnames and ports you
+  configured each instance with.
+- Insert the tenant's hostnames for each `server_name` entry. In + our case these are `public.example.com` for the public interface, and + `staff.example.com` for the staff interface. + +Once you've saved your configuration, you can test it with: + + /usr/sbin/nginx -t + +If Nginx reports that all is well, reload the configurations with: + + /usr/sbin/nginx -s reload + +And, finally, browse to `http://public.example.com/` to verify that Nginx +is now accepting requests and forwarding them to your app servers. +We're done! diff --git a/src/content/docs/fr/provisioning/domains.md b/src/content/docs/fr/provisioning/domains.md new file mode 100644 index 0000000..9fa0d3e --- /dev/null +++ b/src/content/docs/fr/provisioning/domains.md @@ -0,0 +1,85 @@ +--- +title: Serving over subdomains +description: How to configure ArchivesSpace and your web server to serve the application over subdomains. +--- + +This document describes how to configure ArchivesSpace and your web server to serve the application over subdomains (e.g., `http://staff.myarchive.org/` and `http://public.myarchive.org/`), which is the recommended +practice. Separate documentation is available if you wish to [serve ArchivesSpace under a prefix](/provisioning/prefix) (e.g., `http://aspace.myarchive.org/staff` and +`http://aspace.myarchive.org/public`). + +1. [Configuring Your Firewall](#Step-1%3A-Configuring-Your-Firewall) +2. [Configuring Your Web Server](#Step-2%3A-Configuring-Your-Web-Server) + - [Apache](#Apache) + - [Nginx](#Nginx) +3. [Configuring ArchivesSpace](#Step-3%3A-Configuring-ArchivesSpace) + +## Step 1: Configuring Your Firewall + +Since using subdomains negates the need for users to access the application directly on ports 8080 and 8081, these should be locked down to access by localhost only. 
On a Linux server, this can be done using iptables:
+
+```shell
+iptables -A INPUT -p tcp -s localhost --dport 8080 -j ACCEPT
+iptables -A INPUT -p tcp --dport 8080 -j DROP
+iptables -A INPUT -p tcp -s localhost --dport 8081 -j ACCEPT
+iptables -A INPUT -p tcp --dport 8081 -j DROP
+```
+
+## Step 2: Configuring Your Web Server
+
+### Apache
+
+The [mod_proxy module](https://httpd.apache.org/docs/2.4/mod/mod_proxy.html) is necessary for Apache to route public web traffic to ArchivesSpace's ports as designated in your config.rb file (ports 8080 and 8081 by default).
+
+This can be set up as a reverse proxy in the Apache configuration like so:
+
+```apache
+<VirtualHost *:80>
+  ServerName public.myarchive.org
+  ProxyPass / http://localhost:8081/
+  ProxyPassReverse / http://localhost:8081/
+</VirtualHost>
+
+<VirtualHost *:80>
+  ServerName staff.myarchive.org
+  ProxyPass / http://localhost:8080/
+  ProxyPassReverse / http://localhost:8080/
+</VirtualHost>
+```
+
+The purpose of ProxyPass is to route _incoming_ traffic on the public URL (public.myarchive.org) to port 8081 of your server, where ArchivesSpace's public interface sits. The purpose of ProxyPassReverse is to intercept _outgoing_ traffic and rewrite the header to match the URL that the browser is expecting to see (public.myarchive.org).
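+
+One optional variation (not required by the steps above) uses Apache's standard `ProxyPreserveHost` directive to pass the browser's original Host header upstream instead of rewriting it to `localhost`, which some setups need when the application builds absolute URLs. A minimal sketch, to be verified against your own configuration:
+
+```apache
+<VirtualHost *:80>
+  ServerName public.myarchive.org
+  # Forward the original Host header (public.myarchive.org) to the backend
+  ProxyPreserveHost On
+  ProxyPass / http://localhost:8081/
+  ProxyPassReverse / http://localhost:8081/
+</VirtualHost>
+```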
+
+### nginx
+
+Using nginx as a reverse proxy requires a configuration like so:
+
+```nginx
+server {
+  listen 80;
+  listen [::]:80;
+  server_name staff.myarchive.org;
+
+  location / {
+    proxy_pass http://localhost:8080/;
+  }
+}
+
+server {
+  listen 80;
+  listen [::]:80;
+  server_name public.myarchive.org;
+
+  location / {
+    proxy_pass http://localhost:8081/;
+  }
+}
+```
+
+## Step 3: Configuring ArchivesSpace
+
+The only configuration within ArchivesSpace that needs to occur is adding your domain names to the following lines in config.rb:
+
+```ruby
+AppConfig[:frontend_proxy_url] = 'http://staff.myarchive.org'
+AppConfig[:public_proxy_url] = 'http://public.myarchive.org'
+```
+
+This configuration allows staff edit links to appear on the public site for users logged in to the staff interface.
+
+Do **not** change `AppConfig[:public_url]` or `AppConfig[:frontend_url]`; these must retain their port numbers in order for the application to run.
diff --git a/src/content/docs/fr/provisioning/https.md b/src/content/docs/fr/provisioning/https.md
new file mode 100644
index 0000000..b02732c
--- /dev/null
+++ b/src/content/docs/fr/provisioning/https.md
@@ -0,0 +1,163 @@
+---
+title: Serving over HTTPS
+description: Installing ArchivesSpace in such a manner that all end-user requests are served over HTTPS.
+---
+
+This document describes the approach for those wishing to install
+ArchivesSpace in such a manner that all end-user requests (i.e., URLs)
+are served over HTTPS rather than HTTP.
For the purposes of this documentation, the URLs for the staff and public interfaces will be:
+
+- `https://staff.myarchive.org` - staff interface
+- `https://public.myarchive.org` - public interface
+
+The configuration described in this document is one possible approach,
+and to keep things simple the following are assumed:
+
+- ArchivesSpace is running on a single Linux server
+- The server is running Apache or Nginx
+- You have obtained an SSL certificate and key from an authority
+- You have ensured that the appropriate firewall ports (80 and 443) have been opened.
+
+1. [Configuring the Web Server](<#Step-1%3A-Configure-Web-Server-(Apache-or-Nginx)>)
+   - [Apache](#Apache)
+     - [Setting up SSL](#Setting-up-SSL)
+     - [Setting up Redirects](#Setting-up-Redirects)
+   - [Nginx](#Nginx)
+2. [Configuring ArchivesSpace](#Step-2%3A-Configure-ArchivesSpace)
+
+## Step 1: Configure Web Server (Apache or Nginx)
+
+### Apache
+
+Information about configuring Apache for SSL can be found at http://httpd.apache.org/docs/current/ssl/ssl_howto.html. You should read
+that documentation before attempting to configure SSL.
+
+#### Setting up SSL
+
+Use the `NameVirtualHost` and `VirtualHost` directives to proxy
+requests to the actual application URLs. This requires the use of the `mod_proxy` module in Apache.
+
+```apache
+NameVirtualHost *:443
+
+<VirtualHost *:443>
+  ServerName staff.myarchive.org
+  SSLEngine On
+  SSLCertificateFile "/path/to/your/cert.crt"
+  SSLCertificateKeyFile "/path/to/your/key.key"
+  RequestHeader set X-Forwarded-Proto "https"
+  ProxyPreserveHost On
+  ProxyPass / http://localhost:8080/
+  ProxyPassReverse / http://localhost:8080/
+</VirtualHost>
+
+<VirtualHost *:443>
+  ServerName public.myarchive.org
+  SSLEngine On
+  SSLCertificateFile "/path/to/your/cert.crt"
+  SSLCertificateKeyFile "/path/to/your/key.key"
+  RequestHeader set X-Forwarded-Proto "https"
+  ProxyPreserveHost On
+  ProxyPass / http://localhost:8081/
+  ProxyPassReverse / http://localhost:8081/
+</VirtualHost>
+```
+
+You may optionally set the `Secure` attribute on the `Set-Cookie` header by adding `Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure`. When a cookie has the `Secure` attribute, the user agent will include the cookie in an HTTP request only if the request is transmitted over a secure channel.
+
+Users may encounter a warning in the browser's console such as `Cookie “archivesspace_session” does not have a proper “SameSite” attribute value. Soon, cookies without the “SameSite” attribute or with an invalid value will be treated as “Lax”. This means that the cookie will no longer be sent in third-party contexts` (example from Firefox 104). Some browsers (for example, Chrome version 80 or above) already enforce this. Standard ArchivesSpace installations should be unaffected, but if you encounter problems with integrations and/or customizations of your particular installation, the following directive may be required: `Header edit Set-Cookie ^(.*)$ $1;SameSite=None;Secure`. Alternatively, `SameSite=Lax` (the default) or even `SameSite=Strict` may be more appropriate depending on your functional and/or security requirements.
Please refer to https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie/SameSite or other resources for more information.
+
+#### Setting up Redirects
+
+When running a site over HTTPS, it's a good idea to set up a redirect to ensure any outdated HTTP requests are routed to the correct URL. This can be done through Apache as follows:
+
+```apache
+<VirtualHost *:80>
+  ServerName staff.myarchive.org
+  RewriteEngine On
+  RewriteCond %{HTTPS} off
+  RewriteRule (.*) https://staff.myarchive.org$1 [R,L]
+</VirtualHost>
+
+<VirtualHost *:80>
+  ServerName public.myarchive.org
+  RewriteEngine On
+  RewriteCond %{HTTPS} off
+  RewriteRule (.*) https://public.myarchive.org$1 [R,L]
+</VirtualHost>
+```
+
+### Nginx
+
+Information about configuring nginx for SSL can be found at http://nginx.org/en/docs/http/configuring_https_servers.html. You should read
+that documentation before attempting to configure SSL.
+
+```nginx
+server {
+  listen 80;
+  listen [::]:80;
+  server_name staff.myarchive.org;
+  return 301 https://staff.myarchive.org$request_uri;
+}
+
+server {
+  listen 443 ssl;
+  server_name staff.myarchive.org;
+  charset utf-8;
+
+  ssl_certificate /path/to/your/fullchain.pem;
+  ssl_certificate_key /path/to/your/key.pem;
+
+  location / {
+    proxy_pass http://localhost:8080;
+  }
+}
+
+server {
+  listen 80;
+  listen [::]:80;
+  server_name public.myarchive.org;
+  return 301 https://public.myarchive.org$request_uri;
+}
+
+server {
+  listen 443 ssl;
+  server_name public.myarchive.org;
+  charset utf-8;
+
+  ssl_certificate /path/to/your/fullchain.pem;
+  ssl_certificate_key /path/to/your/key.pem;
+
+  location / {
+    proxy_pass http://localhost:8081;
+  }
+}
+```
+
+## Step 2: Configure ArchivesSpace
+
+The following lines need to be altered in the config.rb file:
+
+```ruby
+AppConfig[:frontend_proxy_url] = "https://staff.myarchive.org"
+AppConfig[:public_proxy_url] = "https://public.myarchive.org"
+```
+
+These lines don't need to
be altered and should remain with their default values, e.g.:
+
+```ruby
+AppConfig[:frontend_url] = "http://localhost:8080"
+AppConfig[:public_url] = "http://localhost:8081"
+AppConfig[:frontend_proxy_prefix] = proc { "#{URI(AppConfig[:frontend_proxy_url]).path}/".gsub(%r{/+$}, "/") }
+AppConfig[:public_proxy_prefix] = proc { "#{URI(AppConfig[:public_proxy_url]).path}/".gsub(%r{/+$}, "/") }
+```
diff --git a/src/content/docs/fr/provisioning/index.md b/src/content/docs/fr/provisioning/index.md
new file mode 100644
index 0000000..95ea9e7
--- /dev/null
+++ b/src/content/docs/fr/provisioning/index.md
@@ -0,0 +1,15 @@
+---
+title: Provisioning and server configuration
+description: The index to the provisioning section of the ArchivesSpace technical documentation.
+---
+
+- [Running ArchivesSpace with load balancing and multiple tenants](./clustering.html)
+- [Serving ArchivesSpace over subdomains](./domains.html)
+- [Serving ArchivesSpace user-facing applications over HTTPS](./https.html)
+- [JMeter Test Group Template](./jmeter.html)
+- [Running ArchivesSpace against MySQL](./mysql.html)
+- [Application monitoring with New Relic](./newrelic.html)
+- [Running ArchivesSpace under a prefix](./prefix.html)
+- [robots.txt](./robots.html)
+- [Running ArchivesSpace with external Solr](./solr.html)
+- [Tuning ArchivesSpace](./tuning.html)
diff --git a/src/content/docs/fr/provisioning/jmeter.md b/src/content/docs/fr/provisioning/jmeter.md
new file mode 100644
index 0000000..0373a4d
--- /dev/null
+++ b/src/content/docs/fr/provisioning/jmeter.md
@@ -0,0 +1,13 @@
+---
+title: JMeter Test Group Template
+description: How to create a JMeter Test Group.
+---
+
+## Creating a test group
+
+Load the file `example_test_plan.jmx` into JMeter and make sure the following are true for the example to run successfully:
+
+- The backend is running on localhost port 4567
+- There is at least one repository, and its URL is `/repositories/2`
+
+The example will log in to the backend, store the session key as a JMeter variable, and make two basic requests, one of which will require a session key.
diff --git a/src/content/docs/fr/provisioning/mysql.md b/src/content/docs/fr/provisioning/mysql.md
new file mode 100644
index 0000000..8ba110a
--- /dev/null
+++ b/src/content/docs/fr/provisioning/mysql.md
@@ -0,0 +1,89 @@
+---
+title: Using MySQL
+description: Instructions for how to set up MySQL with ArchivesSpace.
+---
+
+Out of the box, the ArchivesSpace distribution runs against an
+embedded database, but this is only suitable for demonstration
+purposes. When you are ready to start using ArchivesSpace with
+real users and data, you should switch to using MySQL. MySQL offers
+significantly better performance when multiple people are using the
+system, and will ensure that your data is kept safe.
+
+ArchivesSpace is currently able to run on MySQL versions 5.x and 8.x.
+
+## Download MySQL Connector
+
+ArchivesSpace requires the
+[MySQL Connector for Java](http://dev.mysql.com/downloads/connector/j/),
+which must be downloaded separately because of its licensing agreement.
+Download the Connector and place it in a location where ArchivesSpace can
+find it on its classpath:
+
+```shell
+$ cd lib
+$ curl -Oq https://repo1.maven.org/maven2/com/mysql/mysql-connector-j/9.1.0/mysql-connector-j-9.1.0.jar
+```
+
+Note that the version of the MySQL connector may be different by the
+time you read this.
+
+## Set up your MySQL database
+
+Next, create an empty database in MySQL and grant access to a dedicated
+ArchivesSpace user. The following example uses username `as`
+and password `as123`.
+
+**NOTE: WHEN CREATING THE DATABASE, YOU MUST SET THE DEFAULT CHARACTER
+ENCODING FOR THE DATABASE TO `utf8mb4`.** This is particularly important
+if you use a MySQL client to create the database (e.g. Navicat, MySQL
+Workbench, phpMyAdmin, etc.).
+
+**NOTE: If using AWS RDS MySQL databases, binary logging is not enabled by default and updates will fail.** To enable binary logging, you must create a custom DB parameter group for the database and set `log_bin_trust_function_creators = 1`. See [Working with DB Parameter Groups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html) for information about RDS parameter groups. Within a MySQL session you can also try `SET GLOBAL log_bin_trust_function_creators = 1;`.
+
+```shell
+$ mysql -uroot -p
+
+mysql> create database archivesspace default character set utf8mb4;
+Query OK, 1 row affected (0.08 sec)
+```
+
+If using MySQL 5.7 and below:
+
+```sql
+mysql> grant all on archivesspace.* to 'as'@'localhost' identified by 'as123';
+Query OK, 0 rows affected (0.21 sec)
+```
+
+If using MySQL 8+:
+
+```sql
+mysql> create user 'as'@'localhost' identified by 'as123';
+Query OK, 0 rows affected (0.08 sec)
+
+mysql> grant all privileges on archivesspace.* to 'as'@'localhost';
+Query OK, 0 rows affected (0.21 sec)
+```
+
+Then, modify your `config/config.rb` file to refer to your MySQL
+database. When you modify your configuration file, **MAKE SURE THAT YOU
+SPECIFY THE `UTF-8` CHARACTER ENCODING FOR THE DATABASE**, as shown
+below:
+
+```ruby
+AppConfig[:db_url] = "jdbc:mysql://localhost:3306/archivesspace?user=as&password=as123&useUnicode=true&characterEncoding=UTF-8"
+```
+
+There is a database setup script that will create all the tables that
+ArchivesSpace requires.
Run this with:
+
+```shell
+scripts/setup-database.sh # or setup-database.bat under Windows
+```
+
+You can now follow the instructions in the "Getting Started" section to start
+your ArchivesSpace application.
+
+**NOTE for MySQL 8:** MySQL 8 uses a new default authentication plugin (caching_sha2_password) instead of the mysql_native_password plugin that MySQL 5.7 and older used. This may require starting a MySQL 8 server with the `--default-authentication-plugin=mysql_native_password` option. You may also be able to change the auth mechanism on a per-user basis by logging into MySQL and running `ALTER USER 'as'@'localhost' IDENTIFIED WITH mysql_native_password BY 'as123';`. Also be sure to have the latest [MySQL Connector for Java](http://dev.mysql.com/downloads/connector/j/) from MySQL in your /lib/ directory for ArchivesSpace.
diff --git a/src/content/docs/fr/provisioning/newrelic.md b/src/content/docs/fr/provisioning/newrelic.md
new file mode 100644
index 0000000..49ff283
--- /dev/null
+++ b/src/content/docs/fr/provisioning/newrelic.md
@@ -0,0 +1,40 @@
+---
+title: Application monitoring with New Relic
+description: Instructions for how to set up New Relic for application monitoring on ArchivesSpace.
+---
+
+[New Relic](http://newrelic.com/) is an application performance monitoring tool (amongst other things).
+
+**To use with ArchivesSpace you must:**
+
+- Sign up for an account at New Relic (there is a free tier as well as paid plans)
+- Edit config.rb to:
+  - activate the `newrelic` plugin
+  - add the New Relic license key
+  - add an application name to identify the ArchivesSpace instance in the New Relic dashboard
+
+For example, in config.rb:
+
+```ruby
+## You may have other plugins
+AppConfig[:plugins] = ['local', 'newrelic']
+
+AppConfig[:newrelic_key] = "enteryourkeyhere"
+AppConfig[:newrelic_app_name] = "ArchivesSpace"
+```
+
+- Install the New Relic agent library by initializing the plugin:
+
+```shell
+## For Linux/OSX
+$ scripts/initialize-plugin.sh newrelic
+
+## For Windows
+% scripts\initialize-plugin.bat newrelic
+```
+
+- Start or restart ArchivesSpace to pick up the configuration.
+
+Within a few minutes the application should be visible in the New Relic dashboard with data being collected.
+
+---
diff --git a/src/content/docs/fr/provisioning/prefix.md b/src/content/docs/fr/provisioning/prefix.md
new file mode 100644
index 0000000..d0ddc38
--- /dev/null
+++ b/src/content/docs/fr/provisioning/prefix.md
@@ -0,0 +1,64 @@
+---
+title: Proxy prefix
+description: Instructions for serving each user-facing ArchivesSpace application under a prefix rather than as its own subdomain.
+---
+
+**Important Note: Prefixes do NOT work properly in versions between 2.0.1 and 2.2.2**
+
+This document describes a simple approach for those wishing to deviate from the recommended
+practice of running each user-facing ArchivesSpace application on its own subdomain, and instead
+serve each application under a prefix, e.g.
+
+```
+http://aspace.myarchive.org/staff
+http://aspace.myarchive.org/public
+```
+
+The configuration described in this document is one possible approach,
+and to keep things simple the following are assumed:
+
+- ArchivesSpace is running on a single Linux server
+- The server is running the Apache 2.2+ web server
+
+Unless otherwise stated, it is assumed that you have root access on
+your machines, and all commands are to be run as root (or with sudo).
+
+## Step 1: Setup proxies in your Apache configuration
+
+The following edits can be made in the httpd.conf file itself, or in an included file:
+
+```apache
+ProxyPass /staff http://localhost:8080/staff
+ProxyPassReverse /staff http://localhost:8080/
+ProxyPass /public http://localhost:8081/public
+ProxyPassReverse /public http://localhost:8081/
+```
+
+Now restart Apache.
+
+## Step 2: Install and configure ArchivesSpace
+
+Follow the instructions in the main README to download and install ArchivesSpace.
+
+Open the file `archivesspace/config/config.rb` and add the following lines:
+
+```ruby
+AppConfig[:frontend_proxy_url] = 'http://aspace.myarchive.org/staff'
+AppConfig[:public_proxy_url] = 'http://aspace.myarchive.org/public'
+```
+
+(Note: These lines should NOT begin with a '#' character.)
+
+Start ArchivesSpace.
+
+## Step 3: (Optional) Lock down ports 8080 and 8081
+
+By default, the staff and public applications are accessible on ports 8080 and 8081:
+
+```
+http://aspace.myarchive.org:8080
+http://aspace.myarchive.org:8081
+```
+
+Since these are not the URLs at which users should access the application, you will probably
+want to close them off. See README_HTTPS for more information on closing ports using iptables.
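ArchivesSpace derives the path prefix it mounts under from the path component of these proxy URLs. The default `frontend_proxy_prefix`/`public_proxy_prefix` procs in config.rb can be sketched as a standalone helper (simplified; the helper name is ours):

```ruby
require "uri"

# Simplified sketch of the default AppConfig[:frontend_proxy_prefix] /
# AppConfig[:public_proxy_prefix] procs: the mount prefix is the path
# component of the proxy URL, normalized to end in a single slash.
def proxy_prefix(proxy_url)
  "#{URI(proxy_url).path}/".gsub(%r{/+$}, "/")
end

proxy_prefix("http://aspace.myarchive.org/staff") # => "/staff/"
proxy_prefix("http://staff.myarchive.org")        # => "/" (no prefix)
```

This is why the subdomain setup needs no prefix configuration: a proxy URL with an empty path resolves to `/`.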
diff --git a/src/content/docs/fr/provisioning/robots.md b/src/content/docs/fr/provisioning/robots.md
new file mode 100644
index 0000000..702522a
--- /dev/null
+++ b/src/content/docs/fr/provisioning/robots.md
@@ -0,0 +1,45 @@
+---
+title: robots.txt
+description: Instructions for adding a robots.txt to your ArchivesSpace site.
+---
+
+The easiest way to add a `robots.txt` to your site is simply to create
+one in your `/config/` directory. This file will be served as a standard
+`robots.txt` file when you start your site.
+
+If you're not able to do that, you can serve a separate file via your proxy.
+
+For Apache the config would look like this:
+
+```apache
+<Location "/robots.txt">
+  SetHandler None
+  Require all granted
+</Location>
+Alias /robots.txt /var/www/robots.txt
+```
+
+For nginx, more like this:
+
+```nginx
+location /robots.txt {
+  alias /var/www/robots.txt;
+}
+```
+
+You may also add robots meta-tags to your `layout_head.html.erb` to be included in the header area of your site.
+
+For example:
+
+`<meta name="robots" content="noindex, nofollow">`
+
+A sensible starting point for a `robots.txt` file looks something like this:
+
+```
+User-agent: *
+Disallow: /search*
+Disallow: /inventory/*
+Disallow: /collection_organization/*
+Disallow: /repositories/*/top_containers/*
+Disallow: /check_session*
+Disallow: /repositories/*/resources/*/tree/*
+```
diff --git a/src/content/docs/fr/provisioning/solr.md b/src/content/docs/fr/provisioning/solr.md
new file mode 100644
index 0000000..84845d0
--- /dev/null
+++ b/src/content/docs/fr/provisioning/solr.md
@@ -0,0 +1,205 @@
+---
+title: External Solr
+description: Instructions for installing and using external Solr with ArchivesSpace.
+---
+
+:::note
+For ArchivesSpace > 3.1.1, external Solr is **required**. For previous versions it is optional.
+:::
+
+## Supported Solr Versions
+
+See the [Solr requirement notes](/administration/getting_started#solr)
+
+## Install Solr
+
+Refer to the [Solr documentation](https://solr.apache.org/guide/solr/latest/) for instructions on setting up Solr on your server.
+
+You will download the Solr package and extract it to a folder of your choosing. Do not start Solr
+until you have added the ArchivesSpace configuration files.
+
+**We strongly recommend a standalone mode installation. No support will be provided for Solr
+Cloud deployments specifically (i.e. we cannot help troubleshoot Zookeeper).**
+
+## Create a configset
+
+Before running Solr you will need to
+set up a [configset](https://solr.apache.org/guide/8_10/config-sets.html#configsets-in-standalone-mode).
+
+### Create a new directory
+
+#### Linux
+
+Using the command line:
+
+```shell
+mkdir -p /$path/$to/$solr/server/solr/configsets/archivesspace/conf
+```
+
+Be sure to replace `/$path/$to/$solr` with your actual Solr location, which might be something like:
+
+```shell
+mkdir -p /opt/solr/server/solr/configsets/archivesspace/conf
+```
+
+#### Windows
+
+Right-click on your Solr directory and open in Windows Terminal (Powershell).
+
+```
+mkdir -p .\server\solr\configsets\archivesspace\conf
+```
+
+You should see something like this in response:
+
+```
+Directory: C:\Users\archivesspace\Projects\solr-8.10.1\server\solr\configsets\archivesspace
+Mode LastWriteTime Length Name
+---- ------------- ------ ----
+d----- 10/25/2021 12:15 PM conf
+```
+
+### Copy the config files
+
+Copy the ArchivesSpace Solr configuration files from the `solr` directory included
+in the zip file release into the `$SOLR_HOME/server/solr/configsets/archivesspace/conf` directory.
+
+There should be four files:
+
+- schema.xml
+- solrconfig.xml
+- stopwords.txt
+- synonyms.txt
+
+```shell
+ls .\server\solr\configsets\archivesspace\conf\
+
+Directory: C:\Users\archivesspace\Projects\solr-8.10.1\server\solr\configsets\archivesspace\conf
+
+Mode LastWriteTime Length Name
+---- ------------- ------ ----
+-a---- 10/25/2021 12:18 PM 18291 schema.xml
+-a---- 10/25/2021 12:18 PM 3046 solrconfig.xml
+-a---- 10/25/2021 12:18 PM 0 stopwords.txt
+-a---- 10/25/2021 12:18 PM 0 synonyms.txt
+```
+
+_Note: your exact output may be slightly different._
+
+## Set up the environment
+
+When using Solr v9 or later, the use of [Solr modules](https://solr.apache.org/guide/solr/latest/configuration-guide/solr-modules.html) is required.
+We recommend using the environment variable option to specify the modules to use:
+
+```shell
+SOLR_MODULES=analysis-extras
+```
+
+This environment variable needs to be available to the Solr instance at runtime.
+
+For instructions on how to set an environment variable, here are some recommended articles:
+
+- When using [linux](https://www.freecodecamp.org/news/how-to-set-an-environment-variable-in-linux)
+- When using a [mac](https://phoenixnap.com/kb/set-environment-variable-mac)
+- When using [windows](https://docs.oracle.com/cd/E83411_01/OREAD/creating-and-modifying-environment-variables-on-windows.htm#OREAD158). Note that on Windows, the variable name should be: `SOLR_MODULES` and the variable value: `analysis-extras`
+
+## Set up a Solr core
+
+With the `configset` in place, first start Solr:
+
+```bash
+bin/solr start
+```
+
+Wait for Solr to start (example output when running as a non-admin user):
+
+```shell
+.\bin\solr start
+"java version info is 11.0.12"
+"Extracted major version is 11"
+OpenJDK 64-Bit Server VM warning: JVM cannot use large page memory because it does not have enough privilege to lock pages in memory.
+Waiting up to 30 to see Solr running on port 8983
+Started Solr server on port 8983.
Happy searching! +``` + +You can check that Solr is running on [http://localhost:8983](http://localhost:8983). + +Now create the core: + +```shell +bin/solr create -c archivesspace -d archivesspace +``` + +You should see confirmation: + +```shell +"java version info is 11.0.12" +"Extracted major version is 11" + +Created new core 'archivesspace' +``` + +In the browser you should be able to access the [ArchivesSpace schema](http://localhost:8983/solr/#/archivesspace/files?file=schema.xml). + +## Disable the embedded server Solr instance (optional <= 3.1.1 only) + +Edit the ArchivesSpace config.rb file: + +```ruby +AppConfig[:enable_solr] = false +``` + +Note that doing this means that you will have to backup Solr manually. + +## Set the Solr url in your config.rb file + +This config setting should point to your Solr instance: + +```ruby +AppConfig[:solr_url] = "http://localhost:8983/solr/archivesspace" +``` + +If you are not running ArchivesSpace and Solr on the same server, update +`localhost` to your Solr address. + +By default, on startup, ArchivesSpace will check that the Solr configuration +appears to be correct and will raise an error if not. You can disable this check +by setting `AppConfig[:solr_verify_checksums] = false` in `config.rb`. + +Please note: if you're upgrading an existing installation of ArchivesSpace to use an external Solr, you will need to trigger a full re-index. +See [Indexes](/administration/indexes) for more details. + +--- + +You can now follow the instructions in the [Getting started](/administration/getting_started) section to start +your ArchivesSpace application. + +--- + +## Upgrading Solr + +If you are using an older version of Solr than is recommended you may need (if called out +in release notes) or want to upgrade. 
Before performing an upgrade it is recommended that you review: + +- [Solr upgrade notes](https://solr.apache.org/guide/solr/latest/upgrade-notes/solr-upgrade-notes.html) +- [ArchivesSpace's release notes](https://github.com/archivesspace/archivesspace/releases) + +You should also review this document as the installation steps may include +instructions that were not present in the past. For example, from Solr v9 there is a +requirement to use Solr modules with instructions to configure the modules using environment +variables. + +The crucial part will be ensuring that ArchivesSpace's schema is being used for the +ArchivesSpace Solr index. The config setting `AppConfig[:solr_verify_checksums] = true` +will perform a check on startup that confirms this is the case, otherwise ArchivesSpace +will not be able to start up. + +From ArchivesSpace 3.5+ `AppConfig[:solr_verify_checksums]` does not check the +`solrconfig.xml` file. Therefore you can make changes to it without ArchivesSpace failing +on startup. However, for an upgrade you will want to at least compare the ArchivesSpace +`solrconfig.xml` to the one that is in use in case there are changes that need to be made to +work with the upgraded-to version of Solr. For example the ArchivesSpace Solr v8 `solrconfig.xml` +will not work as is with Solr v9. + +After upgrading Solr you should trigger a full re-index. Instructions for this are in +[Indexes](/administration/indexes). diff --git a/src/content/docs/fr/provisioning/tuning.md b/src/content/docs/fr/provisioning/tuning.md new file mode 100644 index 0000000..b36f9f2 --- /dev/null +++ b/src/content/docs/fr/provisioning/tuning.md @@ -0,0 +1,51 @@ +--- +title: Performance tuning +description: Guidance for performance tuning of the ArchivesSpace stack. +--- + +ArchivesSpace is a stack of web applications which may require special tuning in order to run most effectively. 
This is especially the case for institutions with lots of data or many simultaneous users editing metadata.
+Keep in mind that ArchivesSpace can be hosted on multiple servers, either in a [multitenant setup](/provisioning/clustering) or by deploying the various applications (i.e. backend, frontend, public, solr, and indexer) on separate servers.
+
+## Application Settings
+
+The application itself can be tuned in numerous ways. It’s a good idea to read the [configuration documentation](/customization/configuration), as there are numerous settings that can be adjusted to fit your needs.
+
+An important thing to note is that since ArchivesSpace is a Java application, it’s possible to set the memory allocations used by the JVM. There are numerous articles on the internet full of information about what the optimal settings are, which will depend greatly on the load your server is experiencing and your hardware. It’s a good idea to monitor the application and ensure that it’s not hitting the top limit of the heap you’ve configured.
+
+These settings are:
+
+- `ASPACE_JAVA_XMX`: maximum heap space (maps to Java’s `-Xmx`, default `-Xmx1024m`)
+- `ASPACE_JAVA_XSS`: thread stack size (maps to `-Xss`, default `-Xss2m`)
+- `ASPACE_GC_OPTS`: options used by the Java garbage collector (default: `-XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:NewRatio=1`)
+
+To modify these settings, Linux users can either export an environment variable (e.g. `$ export ASPACE_JAVA_XMX="-Xmx2048m"`) or edit the archivesspace.sh startup script and modify the defaults.
+
+Windows users must edit the archivesspace.bat file.
+
+If you're having trouble with errors like `java.lang.OutOfMemoryError`, try doubling the `ASPACE_JAVA_XMX`.
On Linux you can do this either by setting an environment variable like `$ export ASPACE_JAVA_XMX="-Xmx2048m"` or by editing archivesspace.sh:
+
+```shell
+if [ "$ASPACE_JAVA_XMX" = "" ]; then
+  ASPACE_JAVA_XMX="-Xmx2048m"
+fi
+```
+
+For Windows, you'll change archivesspace.bat:
+
+```shell
+java -Darchivesspace-daemon=yes %JAVA_OPTS% -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:NewRatio=1 -Xss2m -Xmx2048m -Dfile.encoding=UTF-8 -cp "%GEM_HOME%\gems\jruby-rack-1.1.12\lib\*;lib\*;launcher\lib\*!JRUBY!" org.jruby.Main "launcher/launcher.rb" > "logs/archivesspace.out" 2>&1
+```
+
+**NOTE: THE APPLICATION WILL NOT USE THE AVAILABLE MEMORY UNLESS YOU SET THE MAXIMUM HEAP SIZE TO ALLOCATE IT.** For example, if your server has 4 gigs of RAM but you haven’t adjusted the ArchivesSpace settings, you’ll only be using 1 gig.
+
+## MySQL
+
+The ArchivesSpace application can hit a database server rather hard, since it’s a metadata-rich application. There are many articles online about how to tune a MySQL database. A good place to start is something like [MySQL Tuner](http://mysqltuner.com/) or [Tuning Primer](https://rtcamp.com/tutorials/mysql/tuning-primer/), which can give good hints on possible tweaks to make to your MySQL server configuration.
+
+Keep a close eye on the memory available to the server, as well as your InnoDB buffer pool.
+
+## Solr
+
+The internet is full of suggestions on how to optimize a Solr index. [Running an external Solr index](/provisioning/solr) can be beneficial to the performance of ArchivesSpace, since that moves the index to its own server.
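The fallback behavior of the tuning variables described under Application Settings can be sketched in a few lines of Ruby (a simplified sketch only; the real logic lives in the archivesspace.sh and archivesspace.bat startup scripts):

```ruby
# Sketch of how the startup scripts fall back to default JVM options
# when the tuning environment variables are unset. Variable names come
# from this page; the values carry the leading dash Java expects.
def jvm_opts(env = ENV)
  {
    heap:  env.fetch("ASPACE_JAVA_XMX", "-Xmx1024m"),
    stack: env.fetch("ASPACE_JAVA_XSS", "-Xss2m"),
    gc:    env.fetch("ASPACE_GC_OPTS",
                     "-XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:NewRatio=1")
  }
end

jvm_opts({})                                          # => the defaults
jvm_opts({ "ASPACE_JAVA_XMX" => "-Xmx2048m" })[:heap] # => "-Xmx2048m"
```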
diff --git a/src/content/docs/fr/release-notes/v4.0.0.md b/src/content/docs/fr/release-notes/v4.0.0.md new file mode 100644 index 0000000..3324b7b --- /dev/null +++ b/src/content/docs/fr/release-notes/v4.0.0.md @@ -0,0 +1,89 @@ +--- +title: v4.0.0 +--- + +## ArchivesSpace v4.0.0 Release Summary + +Major technical infrastructure upgrades and user interface improvements characterize this release. Key changes include: + +## Breaking Changes + +- **Breaking change**: [OAI identifiers now use colon separator between the namespace and identifier](#api-and-integration-updates) +- **Breaking change**: [Solr 9 now required](#major-infrastructure-updates) +- **Breaking change**: [the Sequence module has been removed from core ArchivesSpace](#plugins-and-configuration) + +## Major Infrastructure Updates + +- **Breaking change**: Solr 9 now required +- Upgraded to newer versions of: + - Bootstrap (4.3) + - jQuery (3.7.0) + - Rails (6.1.6) + - JRuby (9.3.x.x) + - Nokogiri (1.13.10) + - Sequel (5.9.0) +- Frontend and public development web server migrated from Jetty to Puma (6.4.2) +- Staff application CSS migrated from Less to Sass +- Java 8 no longer supported - requires Java 11 or 17 +- Docker now supported as recommended deployment method + +## Public User Interface Improvements + +- Collection organization sidebar can now be configured for left/right positioning in config.rb +- New information and options for large finding aids + - Displays percentage of loaded records in infinite scroll + - Option to load all children for a resource at once (vs infinite scroll) +- Search terms now highlighted in results +- Fixed bug causing extra lines in notes display +- Change PDF label from "Print" to "Download PDF" +- PDF uses Kurinto fonts by default +- Improved hyperlink display in classification descriptions + +## Staff Interface Enhancements + +- Bulk updater plugin now part of core application +- New ability to duplicate full resource or archival object records +- Enhanced 
spreadsheet importers
+  - Added new fields for digital objects to bulk Digital Object spreadsheet
+  - Location imports can include an owner repository
+  - Archival Object CSV imports now respect publication status
+  - New option to download partially completed digital object spreadsheet template
+- Fixed agent merge preview page
+- Improved staff plugins dropdown in repository settings
+- Fixes to the Rapid Data Entry modal
+- Fixed tooltip bugs
+- Improved Jobs status layouts
+
+## EAD Export Changes
+
+- More fields have special characters escaped
+- Removed commas and periods from langmaterial notes
+- Leading XML tags in Revision Description will no longer cause invalid XML
+
+## Documentation and Testing
+
+- Launched new technical documentation site at docs.archivesspace.org
+- Ported all Selenium tests to Capybara
+- Added functionality for test failure screenshots
+
+## API and Integration Updates
+
+- **Breaking change**: OAI identifiers now use colon separator between the namespace and identifier
+
+## Security and Administration
+
+- New config.rb option to allow users with the Administrator role to access the system information page
+- Added config.rb option for favicon display
+- PUI PDFs will now include clearer error messages when generation fails
+- Enhanced bulk import/update capabilities with new configuration options
+
+## Plugins and Configuration
+
+- **Breaking change**: the Sequence module has been removed from core ArchivesSpace
+
+## Community Contributions
+
+- 76 community contributions accepted
+- 134 Pull Requests merged
+- 146 Jira Tickets closed
+- Contributions from multiple community members and organizations
diff --git a/src/content/docs/ja/404.md b/src/content/docs/ja/404.md
new file mode 100644
index 0000000..976d1cc
--- /dev/null
+++ b/src/content/docs/ja/404.md
@@ -0,0 +1,9 @@
+---
+title: '404'
+editUrl: false
+lastUpdated: false
+tableOfContents: false
+hero:
+  title: '404'
+  tagline: Page not found.
Check the URL or try searching for what you were looking for. +--- diff --git a/src/content/docs/ja/about/authoring.md b/src/content/docs/ja/about/authoring.md new file mode 100644 index 0000000..3b2b1c8 --- /dev/null +++ b/src/content/docs/ja/about/authoring.md @@ -0,0 +1,308 @@ +--- +title: Authoring content +description: This page outlines best practices for updating and writing markdown files for the tech-docs repository. +--- + +The Tech Docs site contains two types of content--documentation pages and blog posts. Both content types are written in [Markdown](https://en.wikipedia.org/wiki/Markdown) and define page-specific details as [yaml](https://yaml.org/) key:value pairs. + +Tech Docs uses [GitHub-flavored Markdown](https://github.github.com/gfm/), a variant of Markdown syntax, and [SmartyPants](https://daringfireball.net/projects/smartypants/), a typographic punctuation plugin. These tools provide authors niceties like generating clickable links from text, creating lists and tables, formatting for quotations and em-dashes, and more. + +## Where pages go + +### Documentation pages + +Documentation pages live under `src/content/docs/`. Each page is a `.md` or `.mdx` file. The URL path is `/` plus the file path relative to that directory, without the extension—for example, `src/content/docs/architecture/public.md` is served at `/architecture/public`. Nested folders add segments to the path. + +### Blog + +Blog posts live under `src/content/blog/` as `.md` or `.mdx` files. The URL is `/blog/` plus the path to the file relative to that folder, without the extension—for example, `src/content/blog/v4-2-0-release-candidate.md` is served at `/blog/v4-2-0-release-candidate`. Nested folders add path segments to the URL. + +Valid frontmatter and body content are required for the site to be built and published. 
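The file-path-to-URL rules above can be sketched as a small function. This is an illustrative sketch only; `contentUrl` is a hypothetical helper, not part of the Tech Docs code base (the real routing is handled by Astro):

```js
// Sketch: derive the public URL for a content file, per the rules above.
// contentUrl is a hypothetical helper for illustration, not project code.
function contentUrl(filePath) {
  const stripExt = (p) => p.replace(/\.mdx?$/, '');
  if (filePath.startsWith('src/content/docs/')) {
    return '/' + stripExt(filePath.slice('src/content/docs/'.length));
  }
  if (filePath.startsWith('src/content/blog/')) {
    return '/blog/' + stripExt(filePath.slice('src/content/blog/'.length));
  }
  throw new Error(`Not a content file: ${filePath}`);
}

console.log(contentUrl('src/content/docs/architecture/public.md')); // "/architecture/public"
console.log(contentUrl('src/content/blog/v4-2-0-release-candidate.md')); // "/blog/v4-2-0-release-candidate"
```

Nested folders simply pass through as extra path segments, matching the examples above.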
+ +## Markdown + +Common use of Markdown throughout Tech Docs includes: + +- [headings](#headings) +- [links](#links) +- [emphasizing text](#emphasizing-text) +- [paragraphs](#paragraphs) +- [lists](#lists) +- [code examples](#code-examples) +- [diagrams](#diagrams) +- [asides](#asides) +- [images](#images) + +### Headings + +Start a new line with between 2 and 6 `#` symbols, followed by a single space, and then the heading text. + +```md +## Example second-level heading +``` + +The number of `#` symbols corresponds to the heading level in the document hierarchy. **The first heading level is reserved for the page title** (available in the page [YAML frontmatter](#yaml-frontmatter)). Therefore the first _authored_ heading on every page should be a second level heading (`##`). + +:::note[Second level heading requirement] +Authored headings should start at the second level (`##`) on every page, since the first level (`#`) is reserved for the page title which is machine-generated. +::: + +```md +<!-- example.md --> + +## Second level heading + +Notice the page starts with a second level heading. + +Notice the blank lines above and below each heading. + +### Third level heading + +This is demo text under the Third level heading section. + +#### Fourth level heading + +##### Fifth level heading + +###### Sixth and final level heading +``` + +### Links + +Create a link by wrapping the link text in brackets (`[ ]`) immediately followed by the external link URL, or internal link path, wrapped in parentheses (`( )`). + +```md +[text](URL or path) +``` + +Be sure not to include any space between the wrapped text and URL. + +```md +<!-- example.md --> + +See the [TechDocs source code](https://github.com/archivesspace/tech-docs). 
+``` + +#### In documentation pages + +##### To other pages + +When linking to another Tech Docs documentation page, start with a forward slash (`/`), followed by the location of the page as found in the `src/content/docs/` directory, and omit the file extension (`.md`). + +```md +✅ [Public user interface](/architecture/public) + +❌ [Public user interface](architecture/public) +❌ [Public user interface](./architecture/public) +❌ [Public user interface](../architecture/public) +❌ [Public user interface](/architecture/public.md) +``` + +:::note[Internal link requirements] +Links to other Tech Docs documentation pages should: + +1. start with a forward slash (`/`) +2. reflect the location of the page as found in `src/content/docs/` +3. not include the file extension (`.md`) + +::: + +##### Within a page + +Starlight provides [automatic heading anchor links](https://starlight.astro.build/guides/authoring-content/#automatic-heading-anchor-links). To link to a section within a page, use the `#` symbol followed by the HTML `id` of the relevant section heading. + +```md +<!-- src/content/docs/about/authoring.md --> + +See the [Links](#links) section on this page. + +See the [Public configuration options](/architecture/public#configuration). +``` + +:::tip +A section heading's `id` is usually the same text string as the heading itself, but in all lowercase letters and with all single spaces converted to single hyphens. See the actual HTML `id` by right clicking on the heading to "inspect" it. +::: + +#### In blog posts + +When you write the body of a blog post, links to documentation pages use the same pattern as [in documentation pages](#to-other-pages): a leading `/` and the path under `src/content/docs/` without `.md`, for example `[Public user interface](/architecture/public)`. 
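The link and anchor conventions above can be sketched as quick checks. Both helpers here are hypothetical and for illustration only; in particular, the heading-id sketch assumes the simple lowercase/hyphen rule described in the tip, while real slug generation also handles punctuation:

```js
// Sketch: validate an internal docs link per the conventions above.
function isValidDocsLink(href) {
  // must start from the site root and omit the .md/.mdx extension
  return href.startsWith('/') && !/\.mdx?$/.test(href);
}

// Sketch: the simple heading-id rule (lowercase, spaces become hyphens).
function headingId(text) {
  return text.toLowerCase().replace(/ /g, '-');
}

console.log(isValidDocsLink('/architecture/public'));    // true
console.log(isValidDocsLink('architecture/public'));     // false
console.log(isValidDocsLink('/architecture/public.md')); // false
console.log(headingId('Where pages go'));                // "where-pages-go"
```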
+
+Links to another blog post use `/blog/` plus that post’s path under `src/content/blog/` without the extension—the same shape as its public URL (see [Blog](#blog) under [Where pages go](#where-pages-go)). For example, `src/content/blog/v4-2-0-release-candidate.md` is linked as `[v4.2.0 release candidate](/blog/v4-2-0-release-candidate)`. Nested folders add segments, for example `/blog/releases/v4-2-0` for `src/content/blog/releases/v4-2-0.md`.
+
+### Emphasizing text
+
+Wrap text to be emphasized with `_` for italics, `**` for bold, and `~~` for strikethrough.
+
+```md
+<!-- example.md -->
+
+_Italicized_ text
+
+**Bold** text
+
+**_Bold and italicized_** text
+
+~~Strikethrough~~ text
+```
+
+### Paragraphs
+
+Create paragraphs by leaving a blank line between lines of text.
+
+```md
+<!-- example.md -->
+
+This is one paragraph.
+
+This is another paragraph.
+```
+
+### Lists
+
+Precede each line in a list with a dash (`-`) for a bulleted list, or a number followed by a period (`1.`) for an ordered list.
+
+```md
+<!-- example.md -->
+
+- Resource
+- Digital Object
+- Accession
+
+1. Accession
+2. Digital Object
+3. Resource
+```
+
+### Code examples
+
+Wrap inline code with a single backtick (`` ` ``).
+
+Wrap code blocks with triple backticks (` ``` `), also known as a "code fence", placed just above and below the code. Append the name of the code's language or its file extension to the first set of backticks for syntax highlighting.
+
+````md
+<!-- example.md -->
+
+The `JSONModel` class is central to ArchivesSpace.
+
+```ruby
+def h(str)
+  ERB::Util.html_escape(str)
+end
+```
+````
+
+### Diagrams
+
+Tech Docs supports [Mermaid](https://mermaid.js.org/) diagrams in both documentation pages and blog posts. 
+
+Use a fenced code block with `mermaid` as the language:
+
+````md
+```mermaid
+flowchart TD
+  A[Staff user edits record] --> B[Indexer updates Solr]
+  B --> C[Updated record in PUI]
+```
+````
+
+Rendered example:
+
+```mermaid
+flowchart TD
+  A[Staff user edits record] --> B[Indexer updates Solr]
+  B --> C[Updated record in PUI]
+```
+
+### Asides
+
+Asides are useful for highlighting secondary or marketing information.
+
+Wrap content in a pair of triple colons (`:::`) and append one of the aside types (e.g. `note`) to the first set of colons. The aside types are `note`, `tip`, `caution`, and `danger`, each of which has its own colors and icon. Customize the title by wrapping text in brackets (`[ ]`) placed after the note type.
+
+```md
+<!-- example.md -->
+
+:::tip
+Become an ArchivesSpace member today! 🎉
+:::
+
+:::note[Some custom title]
+
+### Markdown is supported in asides
+
+![Pic alt text](../../../../images/example.jpg)
+
+Lorem ipsum dolor sit amet consectetur, adipisicing elit.
+:::
+```
+
+:::note
+Asides are a custom Markdown feature provided by the underlying [Starlight framework](https://starlight.astro.build/guides/authoring-content/#asides) that builds Tech Docs.
+:::
+
+:::tip[Customize the aside title]
+Customize the aside title by wrapping text in brackets (`[ ]`) after the note type, in this case `tip`.
+:::
+
+### Images
+
+Show an image using an exclamation point (`!`), followed by the image's [alt text](https://en.wikipedia.org/wiki/Alt_attribute) (a brief description of the image) wrapped in brackets (`[ ]`), followed by the external URL, or internal path, wrapped in parentheses (`( )`).
+
+```md
+<!-- example.md -->
+
+![A dozen Krispy Kreme donuts in a box](https://example.com/donuts.jpg)
+
+![The ArchivesSpace logo](../../../../images/logo.svg)
+```
+
+:::note[Put images in `src/images`]
+All internal images belong in the `src/images` directory. The relative path to images from this file is `../../../../images`. 
+::: + +## YAML frontmatter + +Each content file starts with [YAML](https://yaml.org/) frontmatter: metadata in a block wrapped in triple dashes (`---`). Use the templates below so every field we rely on is set explicitly. For more on how the site build system reads these values, see [Documentation content collection and schema](/about/development#documentation-content-collection-and-schema) and [Blog content collection and schema](/about/development#blog-content-collection-and-schema) on the Development page. + +### Documentation pages + +```md +--- +title: Using MySQL +description: Instructions for how to set up MySQL with ArchivesSpace. +--- +``` + +- **`title`** — Page title shown in the layout, browser tab, and metadata. +- **`description`** — Short summary used for SEO, search, and social previews. + +### Blog posts + +```md +--- +title: v4.2.0 Release Candidate +metaDescription: Early access to ArchivesSpace v4.2.0-RC1 is now available. +teaser: ArchivesSpace <a href="https://github.com/archivesspace/archivesspace/releases/tag/v4.2.0-RC1">v4.2.0-RC1</a> has landed for early testing. +pubDate: 2026-03-20 +authors: + - Pat Doe +updatedDate: 2026-03-21 +--- +``` + +- **`title`** — Post headline on the post page and on the blog index. +- **`metaDescription`** — Short summary for page metadata (SEO) and for the index card when `teaser` is omitted. +- **`teaser`** — Text or HTML for the blog index card (links and light markup are common here). +- **`pubDate`** — Publication date; posts are ordered by this value, newest first. Use an ISO-style date (`YYYY-MM-DD`). +- **`authors`** — List of author names, shown comma-separated on the index and post page. +- **`updatedDate`** — Last-updated date in the same `YYYY-MM-DD` form when the post is revised after publication. + +## Image files + +All internal image files used in Tech Docs content should go in the `src/images` directory, located at the root of this project. 
+
+## Writing conventions
+
+- Plugins, not plug-ins
+- Titles are sentence-case ("Application monitoring with New Relic")
+- Documentation page titles prefer '-ing' verb forms ("Using MySQL", "Serving over HTTPS")
diff --git a/src/content/docs/ja/about/development.md b/src/content/docs/ja/about/development.md
new file mode 100644
index 0000000..40771f9
--- /dev/null
+++ b/src/content/docs/ja/about/development.md
@@ -0,0 +1,318 @@
+---
+title: Development
+description: This page describes how to set up the tech-docs repository, build the website, update dependencies, and run tests
+# This is the last page in the sidebar, so point to Home next instead of
+# the Help Center which comes after this page in the sidebar
+next:
+  link: /
+  label: Home
+---
+
+Tech Docs is a [Node.js](https://nodejs.org) application, built with [Astro](https://astro.build/) and its [Starlight](https://starlight.astro.build/) documentation site framework. The source code is hosted on [GitHub](https://github.com/archivesspace/tech-docs). The site is statically built and (temporarily) hosted via [Cloudflare Pages](https://pages.cloudflare.com/). Content is written in [Markdown](/about/authoring#markdown). When the source code changes, a new set of static files is generated and published shortly after.
+
+## Dependencies
+
+Tech Docs depends on the following open source software (see `.nvmrc` and `package.json` for versions):
+
+1. [Node.js](https://nodejs.org) - JavaScript development and build environment; the version noted in `.nvmrc` reflects the default version of Node.js in the Cloudflare Pages build image
+2. [Astro](https://astro.build/) - Static site generator conceptually based on "components" (React, Vue, Svelte, etc.) rather than "templates" (Jekyll, Handlebars, Pug, etc.)
+   1. [Starlight](https://starlight.astro.build/) - Astro plugin and theme for documentation websites
+   2. [Sharp](https://sharp.pixelplumbing.com/) - Image transformation library used by Astro
+3. 
[Cypress](https://www.cypress.io/) - End-to-end testing framework
+4. [Stylelint](https://stylelint.io/) - CSS linter used locally in text editors and remotely in [CI](#cicd) for testing
+   1. [stylelint-config-recommended](https://github.com/stylelint/stylelint-config-recommended) - Base set of lint rules
+   2. [postcss-html](https://github.com/ota-meshi/postcss-html) - PostCSS syntax for parsing HTML (and HTML-like files, including .astro files)
+   3. [stylelint-config-html](https://github.com/ota-meshi/stylelint-config-html) - Allows Stylelint to parse .astro files
+5. [Prettier](https://prettier.io/) - Source code formatter used locally in text editors and remotely in [CI](#cicd) for testing
+   1. [prettier-plugin-astro](https://github.com/withastro/prettier-plugin-astro) - Allows Prettier to parse .astro files via the command line
+
+## Local development
+
+Run Tech Docs locally by cloning the Tech Docs repository, installing project dependencies, and spinning up a development server:
+
+```sh
+# Requires git and Node.js
+
+# Clone Tech Docs and move to it
+git clone https://github.com/archivesspace/tech-docs.git
+cd tech-docs
+
+# Install dependencies
+npm install
+
+# Run dev server
+npm start
+```
+
+Now go to [localhost:4321](http://localhost:4321) to see Tech Docs running locally. Changes to the source code will be immediately reflected in the browser.
+
+### Building the site
+
+Building the site creates a set of static files, found in `dist` after build, that can be served locally or deployed to a server. Sometimes building the site surfaces errors not found in the development environment.
+
+```sh
+# Build the site and output it to dist/
+npm run build
+```
+
+:::tip
+Serve the built output by running `npm run preview` after a build.
+:::
+
+### Available `npm` scripts

+
+The following scripts are made available via `package.json`. Invoke any script on the command line from the project root by prepending it with the `npm run` command, e.g. `npm run start`. 
+
+- `start` -- run Astro dev server
+- `build` -- build Tech Docs for production
+- `preview` -- serve the static build
+- `astro` -- get Astro help
+- `test:dev` -- run tests in development mode
+- `test:prod` -- run tests in production mode
+- `test` -- defaults to running tests in production mode
+- `prettier:check` -- check formatting with Prettier
+- `prettier:fix` -- fix possible format errors with Prettier
+- `stylelint:check` -- lint CSS with Stylelint
+- `stylelint:fix` -- fix possible CSS lint errors with Stylelint
+
+## Documentation pages
+
+Documentation pages are implemented with Starlight’s `docs` content collection. Source files are in `src/content/docs/`, and Starlight generates their routes as part of the normal Astro static build output (no separate docs build step). Sidebar hierarchy is configured in `src/siteNavigation.json`. For copy-paste templates and short author-facing field guidance, see [YAML frontmatter](/about/authoring#yaml-frontmatter).
+
+### Adding documentation pages
+
+To add a new documentation page:
+
+1. Create a Markdown file in the appropriate docs section directory under `src/content/docs/`.
+2. Add that page to `src/siteNavigation.json` in the correct section and in the correct order so it appears in the sidebar navigation as desired.
+3. If the new page becomes the first page in its section, update the corresponding homepage hero link in `src/components/HomePage.astro` so the section link points to the new first page.
+
+### Legacy `index.md` pages
+
+Some section directories still contain legacy `index.md` pages from the old Tech Docs site. Those pages can still be routed (for example `/architecture` and `/architecture/index`), but they are not included in the sidebar since they are not listed in `src/siteNavigation.json`. 
+ +### Documentation content collection and schema + +In `src/content.config.ts`, the `docs` collection uses `docsLoader()` and [Starlight’s frontmatter schema](https://starlight.astro.build/reference/frontmatter/) via `docsSchema()`, extended with `issueUrl` and `issueText`. Frontmatter is validated at build time. Starlight requires a `title`; other keys are optional unless your page has a specific need. + +| Field | Required | Purpose | +| ----------------- | -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `title` | Yes | Page title in the layout, browser tab, and metadata. | +| `description` | No | Short summary for SEO, search, and social previews. Most pages set this; it is omitted on a few pages (for example [Staff interface](/architecture/frontend), [404](/404)). | +| `slug` | No | Overrides the URL segment instead of deriving it from the file path. | +| `editUrl` | No | Overrides the “Edit page” URL, or `false` to hide the link (for example on [404](/404)). | +| `head` | No | Extra tags for the document head (meta, link, custom title, etc.). | +| `tableOfContents` | No | Table of contents: `false` to hide, or `{ minHeadingLevel, maxHeadingLevel }` to tune range. | +| `template` | No | Starlight layout template (for example `splash`). | +| `hero` | No | Hero area for splash-style pages (`title`, `tagline`, optional `image`, `actions`, etc.). | +| `banner` | No | Optional banner above the page content. | +| `lastUpdated` | No | Override the displayed last-updated date, or `false` to hide it. | +| `prev` | No | Previous pagination link: `false`, a string label, or `{ link, label }`. | +| `next` | No | Next pagination link: `false`, a string label, or `{ link, label }`. 
For example, [Development](/about/development) sets this so “next” goes to Home instead of the external Help Center entry after it in the sidebar. | +| `pagefind` | No | Set `false` to omit the page from the Pagefind index. | +| `draft` | No | When `true`, exclude the page from production builds. | +| `sidebar` | No | Per-page sidebar label, order, badge, `hidden`, or link `attrs`. The main sidebar structure is configured in `src/siteNavigation.json`. | +| `issueUrl` | No | URL for the footer “report an issue” link, or `false` to hide it. Defaults in `src/content.config.ts` when omitted; authors may set explicitly (see [YAML frontmatter](/about/authoring#yaml-frontmatter)). | +| `issueText` | No | Label text for that footer link. Defaults in `src/content.config.ts` when omitted; authors may set explicitly (see [YAML frontmatter](/about/authoring#yaml-frontmatter)). | + +### Documentation routes + +- URLs are derived from file paths in `src/content/docs/` unless `slug` is set in frontmatter. +- Previous/next pagination is derived from sidebar order unless `prev`/`next` are overridden in frontmatter. + +### Documentation UI components + +| Area | Location | +| -------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------- | +| Sidebar hierarchy and grouping | `src/siteNavigation.json` | +| Default docs page title rendering | `src/components/CustomPageTitle.astro` (falls back to Starlight’s default `PageTitle` for non-blog routes) | +| Footer metadata/navigation (edit link, issue link, etc.) | `src/components/overrides/Footer.astro`, `src/components/overrides/EditLink.astro`, `src/components/IssueLink.astro` | + +### Documentation tests + +Documentation-page behavior is covered in Cypress, mainly `cypress/e2e/content-pages.cy.js` (sidebar, table of contents, footer metadata links, and pagination). 
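Assuming the standard Starlight pattern for extending frontmatter, `src/content.config.ts` likely looks something like the sketch below. The exact field types shown here are illustrative, not copied from the repository:

```ts
// Sketch of src/content.config.ts: docsLoader plus docsSchema extended
// with the custom issueUrl / issueText fields (illustrative types).
import { defineCollection, z } from 'astro:content';
import { docsLoader } from '@astrojs/starlight/loaders';
import { docsSchema } from '@astrojs/starlight/schema';

export const collections = {
  docs: defineCollection({
    loader: docsLoader(),
    schema: docsSchema({
      extend: z.object({
        issueUrl: z.union([z.string().url(), z.literal(false)]).optional(),
        issueText: z.string().optional(),
      }),
    }),
  }),
};
```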
+ +## Blog + +The [blog](/blog) is implemented as an Astro content collection alongside the docs collection. Post source files are in `src/content/blog/`; routes live under `src/pages/blog/`. There is no separate blog build step—blog pages are part of the normal Astro static output, and site search ([Search](#search)) indexes them like other HTML. For where to put files and example frontmatter, see [Authoring content](/about/authoring#where-pages-go) and [YAML frontmatter](/about/authoring#yaml-frontmatter). + +### Adding blog posts + +To add a new blog post, create a new Markdown file in `src/content/blog/` with the required frontmatter fields (`title`, `metaDescription`, `pubDate`, and `authors`). + +Optional fields (`teaser` and `updatedDate`) can also be added as needed. No `src/siteNavigation.json` changes are required for blog posts; valid files in the collection are included automatically when the site builds. + +### Blog content collection and schema + +The `blog` collection is registered in `src/content.config.ts` with a Zod schema. Frontmatter is validated at build time. Adding or renaming frontmatter fields requires updating that schema and every consumer of `entry.data` (blog pages, middleware, and tests). + +| Field | Required | Purpose | +| ----------------- | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `title` | Yes | Post headline on the post page and index card. May include HTML for display; the document `<title>` and prev/next pagination labels **strip HTML** from `title`. | +| `metaDescription` | Yes | Short summary for page meta description (SEO). Used as the index teaser text when `teaser` is omitted. | +| `teaser` | No | HTML or plain text for the blog index card (`set:html`). 
Prefer this for links or light HTML on the index; plain text in `title` is safest where tab titles and pagination matter. | +| `pubDate` | Yes | Publication date; posts are sorted by this field, newest first. Parsed from frontmatter and formatted for display in **UTC** on the index and post header. | +| `authors` | Yes | Array of author display names; shown comma-separated on the index and post page. | +| `updatedDate` | No | Optional revision date (`YYYY-MM-DD`). Stored in frontmatter but **not shown in the UI** today; useful for future display or consistency with the authoring template. | + +### Blog routes + +- `src/pages/blog/index.astro` — `/blog` index; loads posts, sorts by `pubDate` descending, passes data to the index UI. +- `src/pages/blog/[id].astro` — individual posts; `getStaticPaths` comes from the collection, so new valid posts appear on the next build. + +### Blog route middleware + +`src/blogRouteData.js` is Starlight route middleware for blog routes. It injects `pubDate`, `authors`, and `postTitle` for post pages and sets prev/next pagination (older post as “Previous,” newer as “Next”). Pagination labels use titles with HTML stripped. + +### Blog UI components + +| Area | Location | +| ------------------------------------ | ----------------------------------------------------------------------------- | +| Index list and cards | `src/components/BlogIndex.astro` | +| Index page title | `src/components/BlogIndexTitleHeader.astro` | +| Post title, date, authors, back link | `src/components/BlogPostTitleHeader.astro`, `src/components/BackToBlog.astro` | +| Default vs blog title | `src/components/CustomPageTitle.astro` | +| Header “Blog” link | `src/components/overrides/Header.astro` | +| Blog layout / sidebar behavior | `src/components/overrides/PageFrame.astro` | + +### Blog tests + +End-to-end coverage is in `cypress/e2e/blog.cy.js`. Update these tests when you change blog markup, URLs, or visible behavior. 
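The ordering and title-handling behavior described above can be sketched as follows. The helper names are illustrative only; the real logic lives in the blog pages and `src/blogRouteData.js`:

```js
// Sketch: newest-first ordering by pubDate and HTML-stripped title labels.
// stripHtml / byNewestFirst are illustrative names, not the project's API.
const stripHtml = (title) => title.replace(/<[^>]+>/g, '');

const byNewestFirst = (posts) =>
  [...posts].sort((a, b) => new Date(b.pubDate) - new Date(a.pubDate));

const posts = [
  { title: 'Older post', pubDate: '2026-01-10' },
  { title: 'Newer <em>post</em>', pubDate: '2026-03-20' },
];

const sorted = byNewestFirst(posts);
console.log(sorted[0].pubDate);          // "2026-03-20"
console.log(stripHtml(sorted[0].title)); // "Newer post"
```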
+
+## Search
+
+Site search is a [Starlight feature](https://starlight.astro.build/guides/site-search/):
+
+> By default, Starlight sites include full-text search powered by [Pagefind](https://pagefind.app/), which is a fast and low-bandwidth search tool for static sites.
+>
+> No configuration is required to enable search. Build and deploy your site, then use the search bar in the site header to find content.
+
+:::note
+Search only runs in production builds, not in the dev server.
+:::
+
+## Theme customization
+
+Starlight can be customized in various ways, including:
+
+- [Settings](https://starlight.astro.build/guides/customization/) -- see `astro.config.mjs`
+- [CSS](https://starlight.astro.build/guides/css-and-tailwind/) -- see `src/styles/custom.css`
+- [Components](https://starlight.astro.build/guides/overriding-components/) -- see `src/components`
+
+## Static assets
+
+### Images
+
+Most image files should be stored in `src/images`. This allows for [processing by Astro](https://docs.astro.build/en/guides/images/), which includes performance optimizations.
+
+Images that should not be processed by Astro, like favicons, should be stored in `public`.
+
+:::note[Use `src/images` for all content images]
+Put all images used in Tech Docs content in `src/images`.
+:::
+
+### The `public` directory
+
+Files placed in `public` are not processed by Astro. They are copied directly to the output and made available from the root of the site, so `public/favicon.svg` becomes available at `docs.archivesspace.org/favicon.svg`, while `public/example/slides.pdf` becomes available at `docs.archivesspace.org/example/slides.pdf`.
+
+## Mermaid diagrams
+
+Tech Docs supports Mermaid diagrams in both docs and blog content (for authoring syntax, see [Authoring content](/about/authoring#diagrams)). Mermaid is a text-to-diagram tool: authors write diagram definitions in a code fence, and Mermaid turns that text into SVG diagrams in the browser. 
This differs from regular fenced code blocks that Starlight renders with [Expressive Code](https://expressive-code.com/) as static syntax-highlighted code snippets.
+
+### Implementation
+
+1. Runtime logic lives in `src/lib/mermaid.ts`.
+2. The runtime is loaded by the Starlight page frame override in `src/components/overrides/PageFrame.astro`.
+3. Mermaid fences are post-processed at runtime and rendered as SVG diagrams.
+
+### Theme behavior
+
+- Mermaid theme is derived from the site theme (`data-theme` on `<html>`):
+  - dark mode => Mermaid `dark`
+  - non-dark modes => Mermaid `default`
+- A `MutationObserver` in `src/lib/mermaid.ts` watches for `data-theme` changes and re-renders existing Mermaid diagrams so colors update after theme toggles.
+- Mermaid text color is explicitly set in `initializeMermaidRuntime()` for improved accessibility over its default styles:
+  - dark mode text: `#fff`
+  - light mode text: `#000`
+
+### Maintenance notes
+
+- If Starlight/Expressive Code markup changes in a future upgrade, update Mermaid selectors/parsing in `src/lib/mermaid.ts` (especially `pre[data-language="mermaid"]` and `.ec-line .code`).
+- If layout-level script loading changes, keep `src/components/overrides/PageFrame.astro` loading `src/lib/mermaid.ts` on pages where markdown content appears.
+- Keep Cypress coverage updated in `cypress/e2e/mermaid.cy.js` when Mermaid rendering behavior or markup changes.
+
+## Update npm dependencies
+
+Run the following commands locally to update the npm dependencies, then push the changes upstream.
+
+```sh
+# List outdated dependencies
+npm outdated
+
+# Update dependencies
+npm update
+```
+
+## Import aliases
+
+Astro supports [import aliases](https://docs.astro.build/en/guides/imports/#aliases) which provide shortcuts to writing long relative import paths. 
+
+```astro title="src/components/overrides/Example.astro" del="../../images" ins="@images"
+---
+import relativeA from '../../images/A_logo.svg' // no alias
+import aliasA from '@images/A_logo.svg' // alias
+---
+```
+
+## Sitemap
+
+Starlight has built-in [sitemap support](https://starlight.astro.build/guides/customization/#enable-sitemap) which is enabled via the top-level `site` key in `astro.config.mjs`. This key generates `/sitemap-index.xml` and `/sitemap-0.xml` when Tech Docs is [built](#building-the-site), and adds the sitemap link to the `<head>` of every page. `public/robots.txt` also points to the sitemap.
+
+## Testing
+
+### End-to-end
+
+Tech Docs uses [Cypress](https://www.cypress.io/) for end-to-end testing of customizations made to the underlying Starlight framework and other project needs. End-to-end tests are located in `cypress/e2e`.
+
+Run the Cypress tests locally by first building and serving the site:
+
+```sh
+# Build the site
+npm run build
+
+# Serve the build output
+npm run preview
+```
+
+Then **in a different terminal** initiate the tests:
+
+```sh
+# Run the tests
+npm test
+```
+
+### Code style
+
+Nearly all files in the Tech Docs code base get formatted by [Prettier](https://prettier.io/) to ensure consistent readability and syntax. Run Prettier locally to find format errors and automatically fix them when possible:
+
+```sh
+# Check formatting of .md, .css, .astro, .js, .yml, etc. files
+npm run prettier:check
+
+# Fix any errors that can be overwritten automatically
+npm run prettier:fix
+```
+
+All CSS in .css and .astro files is linted by [Stylelint](https://stylelint.io/) to help avoid errors and enforce conventions. 
Run Stylelint locally to find lint errors and automatically fix them when possible:
+
+```sh
+# Check all CSS
+npm run stylelint:check
+
+# Fix any errors that can be overwritten automatically
+npm run stylelint:fix
+```
+
+### CI/CD
+
+Before new changes are accepted into the code base, the [end-to-end](#end-to-end) and [code style](#code-style) tests need to pass. Tech Docs uses [GitHub Actions](https://docs.github.com/en/actions) for its continuous integration and continuous delivery (CI/CD) platform, which automates the testing and deployment processes. The tests are defined in YAML files found in `.github/workflows/` and are run automatically when new changes are proposed.
diff --git a/src/content/docs/ja/administration/backup.md b/src/content/docs/ja/administration/backup.md
new file mode 100644
index 0000000..688cf61
--- /dev/null
+++ b/src/content/docs/ja/administration/backup.md
@@ -0,0 +1,160 @@
+---
+title: Backup and recovery
+description: Steps, commands, and advice for setting up your ArchivesSpace MySQL database and Solr index. Backups will ensure recovery in case of error or failure.
+---
+
+## Using the docker configuration package
+
+### Database backups
+
+The [Docker configuration package](/administration/docker) includes a mechanism that performs periodic backups of your MySQL database,
+using [databacker/mysql-backup](https://github.com/databacker/mysql-backup). By default it is configured to perform
+a dump every two hours. See [configuration](https://github.com/databacker/mysql-backup/blob/master/docs/configuration.md) for more options.
+
+The automatically created backups are located in the [`backups` directory](/administration/docker/) of the docker configuration package. 
+
+#### When using Docker
+
+You can explicitly create a dump of your dockerized database while the docker containers are running by using the following command in your host system shell:
+
+```shell
+docker exec mysql mysqldump -u root -p123456 archivesspace | gzip > /tmp/db.$(date +%F.%H%M%S).sql.gz
+```
+
+#### When using Docker Desktop
+
+You can explicitly create a dump of your dockerized database while the docker containers are running by using the following command on the "Exec" tab of your mysql container (the `docker exec` prefix is not needed there, since the command already runs inside the container):
+
+```shell
+mysqldump -u root -p123456 archivesspace | gzip > /tmp/db.$(date +%F.%H%M%S).sql.gz
+```
+
+You can then export the created database dump from the `/tmp` directory of your mysql container using the "Files" tab.
+
+## Managing your own backups
+
+Performing regular backups of your MySQL database is critical. ArchivesSpace stores
+all of your records data in the database, so as long as you have backups of your
+database you can always recover from errors and failures.
+
+If you are running MySQL, the `mysqldump` utility can dump the database
+schema and data to a file. It's a good idea to run this with the
+`--single-transaction` option to avoid locking your database tables
+while your backups run. It is also essential to use the `--routines`
+flag, which will include functions and stored procedures in the
+backup. The `mysqldump` utility is widely used, and there are many tutorials
+available. As an example, something like this in your `crontab` would back up your
+database twice daily:
+
+```shell
+# Dump archivesspace database 6am and 6pm
+30 06,18 * * * mysqldump -u as -pas123 archivesspace | gzip > ~/backups/db.$(date +%F.%H%M%S).sql.gz
+```
+
+You should store backups in a safe location. 
+ +If you are running with the demo database (NEVER run the demo database in production), +you can create periodic database snapshots using the following configuration settings: + +```ruby +# In this example, we create a snapshot at 4am each day and keep +# 7 days' worth of backups +# +# Database snapshots are written to 'data/demo_db_backups' by +# default. +AppConfig[:demo_db_backup_schedule] = "0 4 * * *" +AppConfig[:demo_db_backup_number_to_keep] = 7 +``` + +Solr indexes can always be [recreated](/administration/indexes/) from the contents of the +database. For large sites, where recreating the indexes would take too long, it is possible to [back up and restore Solr indexes](https://solr.apache.org/guide/solr/latest/deployment-guide/backup-restore.html). +In that case, you also need to back up and restore the files used by the indexers to mark which part of the data is already indexed: + +``` +docker cp archivesspace:/archivesspace/data/indexer_state /tmp/indexer_state +docker cp archivesspace:/archivesspace/data/indexer_pui_state /tmp/indexer_pui_state +``` + +## Creating backups of your database using the provided script + +ArchivesSpace provides simple scripts for Windows and Unix-like systems for backing up the database to a `.zip` file. + +### When using the embedded demo database + +Note: _NEVER use the demo database in production._ You can run: + +```shell +scripts/backup.sh --output /path/to/backup-yyyymmdd.zip +``` + +and the script will generate a file containing a snapshot of the demo database. + +### When using MySQL + +If you are running against MySQL and have `mysqldump` installed, you +can provide the `--mysqldump` option. This will read the +database settings from your configuration file and add a dump of your +MySQL database to the resulting `.zip` file.
+ +```shell +scripts/backup.sh --mysqldump --output ~/backups/backup-yyyymmdd.zip +``` + +## Recovering from backup + +When recovering an ArchivesSpace installation from backup, you will +need to restore your database (either the demo database or MySQL). + +After restoring your database, it is recommended to [recreate your Solr indexes](/administration/indexes/). + +### Recovering your database + +#### When managing your own MySQL + +If you are using MySQL, recovering your database just requires loading +your `mysqldump` backup into an empty database. If you are using the +`scripts/backup.sh` script (described above), this dump file is named +`mysqldump.sql` in your backup `.zip` file. + +To load a MySQL dump file, follow the directions in _Set up your MySQL +database_ to create an empty database with the appropriate +permissions. Then, populate the database from your backup file using +the MySQL client: + +```shell +mysql -uas -p archivesspace < mysqldump.sql +``` + +where `as` is the user name, `archivesspace` is the database name, and +`mysqldump.sql` is the mysqldump filename. + +You will be prompted for the password of the user. + +#### When using the demo database + +If you are using the demo database, your backup `.zip` file will +contain a directory called `demo_db_backups`. Each subdirectory of +`demo_db_backups` contains a backup of the demo database. To +restore from a backup, copy its `archivesspace_demo_db` directory back +to your ArchivesSpace data directory.
For example: + +```shell +cp -a /unpacked/zip/demo_db_backups/demo_db_backup_1373323208_25926/archivesspace_demo_db \ +/path/to/archivesspace/data/ +``` + +#### When running on Docker + +If you are using the Docker configuration package to run ArchivesSpace you can restore a database dump onto your `archivesspace` MySQL database with the following command on your host system shell: + +```shell +docker exec -i mysql mysql -uas -pas123 archivesspace < /tmp/db.2025-02-26.164907.sql +``` + +##### When using Docker Desktop + +On Docker Desktop, you can import your SQL file into the `/tmp/` directory using the "Files" tab of your mysql container. Afterwards, on the "Exec" tab run the command: + +```shell +gunzip -c /tmp/db.2026-02-17.155254.sql.gz | mysql -u as -pas123 archivesspace +``` diff --git a/src/content/docs/ja/administration/docker.md b/src/content/docs/ja/administration/docker.md new file mode 100644 index 0000000..8488c78 --- /dev/null +++ b/src/content/docs/ja/administration/docker.md @@ -0,0 +1,226 @@ +--- +title: Running with Docker +description: Instructions on setting up, running, and managing an ArchivesSpace installation using Docker. +--- + +## Docker images + +Starting with v4.0.0, ArchivesSpace officially supports using [Docker](https://www.docker.com/) as the easiest way to get up and running. Docker eases installing, upgrading, starting and stopping ArchivesSpace. It also makes it easy to set up ArchivesSpace as a system service that starts automatically on every reboot. + +If you prefer not to use Docker, another (more involved) way to get ArchivesSpace up and running is installing the latest [distribution `.zip` file](/getting_started/zip_distribution). + +ArchivesSpace Docker images are available on [Docker Hub](https://hub.docker.com/u/archivesspace).
+ +- main application images are built from [this Dockerfile](https://github.com/archivesspace/archivesspace/blob/master/Dockerfile) +- Solr images are built from [this Dockerfile](https://github.com/archivesspace/archivesspace/blob/master/solr/Dockerfile) + +## Installing + +### System requirements + +ArchivesSpace on Docker has been tested on Ubuntu Linux, Mac OS X, and Windows. At least 1024 MB of RAM is required. We recommend using at least 2 GB for optimal performance. + +### Software Dependencies + +When using Docker, the only software dependency is [Docker](https://www.docker.com/) itself. Follow the [instructions](https://docs.docker.com/get-started/get-docker/) to install the Docker engine. +Optionally, installing [Docker Desktop](https://www.docker.com/products/docker-desktop/) provides a graphical way to manage, start, and stop your Docker containers, review container logs, and more. + +### Downloading the configuration package + +To run ArchivesSpace with Docker, first download the ArchivesSpace docker configuration package of the latest release from [GitHub](https://github.com/archivesspace/archivesspace/releases) (scroll down to the "Assets" section of the latest release page and look for the zip file named `archivesspace-docker-${VERSION}.zip`). + +The downloaded configuration package contains a simple yet configurable and production-ready Docker-based setup intended to run on a single computer. + +### Contents of the configuration package + +Unzipping the downloaded file will create an `archivesspace` directory with the following contents: + +``` +. +├── backups +├── config +│ └── config.rb +├── locales +├── plugins +├── proxy-config +│ └── default.conf +├── sql +├── docker-compose.yml +├── stylesheets +└── .env +``` + +- The `backups` directory is created the first time you start the application and will contain the automatically performed backups of the database. See the [Automated database backups section](#automated-database-backups).
+ +- The `config/config.rb` file contains the [main configuration](/customization/configuration/) of ArchivesSpace. +- The `locales` directory allows [customization of the UI text](/customization/locales/). +- The `plugins` directory is there to accommodate additional ArchivesSpace [plugins](/customization/plugins/). By default, it contains the [`local`](/customization/plugins/#adding-your-own-branding) and [`lcnaf`](https://github.com/archivesspace-plugins/lcnaf) plugins. +- `proxy-config/default.conf` contains the configuration of the bundled `nginx`; see also [proxy configuration](#proxy-configuration). +- In the `sql` directory you can put your `.sql` database dump file to initialize the new database; see the [next section](#migrating-from-the-zip-distribution-to-docker). +- The `stylesheets` directory contains the files that are used to create PDFs and other files. +- `docker-compose.yml` contains all the information required by Docker to build and run ArchivesSpace. +- `.env` contains configuration of the Docker containers, including: + - Credentials used by ArchivesSpace to access its MySQL database. It is recommended to change the default root and user passwords to something safer. + - The database connection URI, which should also be [updated accordingly](/customization/configuration/#database-config) after the database user password is updated in the step above. + +## Migrating from the zip distribution to docker + +If you are currently running ArchivesSpace using the zip file distribution, you can start using Docker instead. + +### Create a backup of your ArchivesSpace instance database + +Use `mysqldump` to create a dump of your MySQL database: + +```shell +mysqldump -uroot -p123456 -h 127.0.0.1 archivesspace > /tmp/db.$(date +%F.%H%M%S).sql +``` + +Follow the steps under the [Backup and recovery](/administration/backup/) section if you need more instructions on how to create backups of your MySQL database.
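If the dump you created was compressed (for example with `gzip`, as in some of the backup examples), unzip it before use, since the database initialization expects a plain-text `.sql` file. A sketch, with an example filename:

```shell
# Unzip a gzipped dump and peek at it to confirm it is plain SQL
# before copying it into the sql directory (filename is an example)
gunzip db.2025-01-01.120000.sql.gz
head -n 3 db.2025-01-01.120000.sql
```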
+ +### Initialize and migrate the database on Docker + +Copy your `.sql` database dump file created above into the `sql` directory of your unzipped Docker configuration package. Make sure the filename includes the `.sql` extension. The file should be in plain text format (not zipped). +Docker will pick it up when it starts for the first time and restore the dump to your new database. + +If you created the dump on an earlier ArchivesSpace version, the system will apply any pending database migrations to upgrade your database to the ArchivesSpace version you are currently running on Docker. + +After the initial run you will want to remove that `.sql` file from the `sql` directory of your unzipped Docker configuration package. + +The docker configuration package already includes a configurable database backup mechanism for MySQL. Read more about it in the [backup and recovery section](/administration/backup/#using-the-docker-configuration-package). + +## Running + +### Resource limits + +We recommend allocating at least 2 GB per container for optimal performance. If the host instance is devoted to running ArchivesSpace, it is advisable to configure no memory limit for Docker containers. + +When using Docker Desktop, a default memory limit is set to 50% of your host's memory. To increase the RAM and other resource limits when using Docker Desktop, see [the documentation](https://docs.docker.com/desktop/settings-and-maintenance/settings/#resources). + +When using Docker without Docker Desktop, no memory limit is set by default. See the [Docker documentation](https://docs.docker.com/engine/containers/resource_constraints/) if you need to set limits on the resources used by ArchivesSpace containers. + +### Note on migrating from the zip distribution + +If migrating from the zip distribution to Docker, you most likely have local MySQL and Solr instances running. Starting ArchivesSpace with Docker will start Docker-based MySQL and Solr instances.
In order to avoid port binding conflicts, make sure that you stop your local MySQL and Solr instances before proceeding. + +### Start + +Open a terminal, change to the `archivesspace` directory that contains the `docker-compose.yml` file and run: + +```shell +docker compose up --detach +``` + +The first time you start ArchivesSpace with Docker, the container images will be downloaded and configuration steps such as database setup and Solr index initialization will be performed automatically. +The whole process can take ten minutes or more, depending on the power of your machine and internet connection speed. **Note:** if you are migrating from the zip distribution to Docker and have already copied a dump of your database into the `sql` directory, initialization of the database and indexing it in Solr can take a long time depending on the size of your data. + +Starting with the `--detach` option allows closing the terminal without stopping ArchivesSpace. Viewing the logs of running ArchivesSpace containers is possible in [Docker Desktop](https://www.docker.com/products/docker-desktop/) or in a terminal with: + +```shell +docker compose logs --follow +``` + +Watch the logs for the welcome message: + +``` +2024-12-04 18:42:17 archivesspace | ************************************************************ +2024-12-04 18:42:17 archivesspace | Welcome to ArchivesSpace!
+2024-12-04 18:42:17 archivesspace | You can now point your browser to http://localhost:8080 +2024-12-04 18:42:17 archivesspace | ************************************************************ +``` + +Using the default proxy configuration, the Public User interface becomes available at http://localhost/ and the Staff User Interface at: http://localhost/staff/ (default login with: admin / admin) + +You can see the status of your running containers with: + +``` +docker ps +``` + +Which will give a listing like this: + +``` +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +6cd7114c1796 nginx:1.21 "/docker-entrypoint.…" 26 hours ago Up 29 minutes 0.0.0.0:80->80/tcp, :::80->80/tcp proxy +9ed453c46a9f archivesspace/archivesspace:4.0.0 "/archivesspace/star…" 26 hours ago Up 29 minutes (healthy) 8080-8081/tcp, 8089-8090/tcp, 8092/tcp archivesspace +ec71dd3030b7 databack/mysql-backup:latest "/entrypoint dump" 26 hours ago Up 29 minutes db-backup +8b74aa374ec8 archivesspace/solr:4.0.0 "docker-entrypoint.s…" 26 hours ago Up 29 minutes 0.0.0.0:8983->8983/tcp, :::8983->8983/tcp solr +d2cf634744fe mysql:8 "docker-entrypoint.s…" 26 hours ago Up 29 minutes 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp mysql +``` + +If you have also [Docker Desktop](https://www.docker.com/products/docker-desktop/) installed, you can use it to start, stop and manage the ArchivesSpace containers after they have been created for the first time. Docker Desktop does have a built in terminal window that can be used to run Docker commands. + +### Stop + +The following commands need to run from `archivespace` directory that contains the `docker-compose.yml` file. 
You can stop running containers (without deleting them) with the command: + +```shell +docker compose stop +``` + +They can be started again with: + +```shell +docker compose up --detach +``` + +### Start a shell within a container to run the provided scripts + +You can get a `bash` shell on the container running the archivesspace application and run any of the scripts in the `scripts` directory with: + +```shell +$ docker exec -it archivesspace bash +archivesspace@9ed453c46a9f:/$ cd archivesspace/scripts/ +archivesspace@9ed453c46a9f:/archivesspace/scripts$ ls +backup.bat backup.sh ead_export.bat ead_export.sh find-base.sh initialize-plugin.bat initialize-plugin.sh password-reset.bat password-reset.sh rb setup-database.bat setup-database.sh +archivesspace@9ed453c46a9f:/archivesspace/scripts$ ./setup-database.sh +NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.base/java.io=ALL-UNNAMED +Loading ArchivesSpace configuration file from path: /archivesspace/config/config.rb +Loading ArchivesSpace configuration file from path: /archivesspace/config/config.rb +Loading ArchivesSpace configuration file from path: /archivesspace/config/config.rb +Detected MySQL connector 8+ +Running migrations against jdbc:mysql://db:3306/archivesspace?useUnicode=true&characterEncoding=UTF-8&user=[REDACTED]&password=[REDACTED]&useSSL=false&allowPublicKeyRetrieval=true&serverTimezone=UTC +All done. +``` + +### Copy files from and to your data directory + +The ArchivesSpace `data` directory is not exposed in the Docker Configuration package (unlike directories such as `config` and `locales`, which are exposed and easily accessible). This is due to issues we have had on Windows when exposing +the `data` directory instead of using a Docker volume for it.
+ +If you need to copy files from/to the `data` directory, or any other directory of the archivesspace installation, you can use [`docker cp`](https://docs.docker.com/reference/cli/docker/container/cp/) commands, such as: + +```shell +docker cp archivesspace:/archivesspace/data/indexer_state /tmp/indexer_state +docker cp ~/Desktop/test.png archivesspace:/archivesspace/data +``` + +## Automated database backups + +The Docker configuration package includes a mechanism that will perform periodic backups of your MySQL database; see the [Backup and recovery page](/administration/backup/#using-the-docker-configuration-package) for more information. + +## Proxy Configuration + +The Docker configuration package includes an `nginx`-based proxy that by default binds to port 80 of the host machine (see the `NGINX_PORT` variable in the `.env` file). See `proxy-config/default.conf` and the [nginx docker page](https://hub.docker.com/_/nginx) for more configuration options. + +## Upgrading + +If you are already using the Docker configuration package and upgrading to a newer ArchivesSpace version, [download and extract](#downloading-the-configuration-package) the latest version of the Docker configuration package. + +### With solr configuration / schema changes + +If the ArchivesSpace version you are upgrading to includes Solr configuration or schema changes (see the [release notes](https://github.com/archivesspace/archivesspace/releases)), then you need to recreate your Solr core and re-index.
Change to the `archivesspace` directory where you extracted the freshly downloaded Docker configuration package and run: + +```shell +docker compose down solr app +docker volume rm archivesspace_app-data archivesspace_solr-data +docker compose pull +docker compose up -d --build --force-recreate +``` + +### Without solr configuration / schema changes + +If no Solr configuration or schema changes are included, change to the extracted `archivesspace` directory and run: + +```shell +docker compose pull +docker compose up -d --build --force-recreate +``` diff --git a/src/content/docs/ja/administration/getting_started.mdx b/src/content/docs/ja/administration/getting_started.mdx new file mode 100644 index 0000000..5572750 --- /dev/null +++ b/src/content/docs/ja/administration/getting_started.mdx @@ -0,0 +1,143 @@ +--- +title: Getting started +description: Detailed hardware and software requirements for running ArchivesSpace, including instructions on setting up and running an ArchivesSpace instance using the latest distribution .zip file. +--- + +import LatestReleaseBlurb from '@components/LatestReleaseBlurb.astro' + +## The latest release + +<LatestReleaseBlurb /> + +## Two installation methods + +There are two different ways to install ArchivesSpace: + +- Using Docker +- Using the `.zip` file distribution + +### Using Docker + +See the [Running with Docker](/administration/docker/) page for instructions on how to install ArchivesSpace using Docker. + +Starting with ArchivesSpace v4.0.0, the easiest and recommended way to get up and running is using Docker. This method eases installing, upgrading, starting, and stopping ArchivesSpace. It also makes it easy to set up ArchivesSpace as a system service that starts automatically on every reboot. + +### Using the `.zip` file distribution + +The older and more involved way is to install from the latest distribution `.zip` file as described below.
+ +#### System requirements + +##### Operating system + +ArchivesSpace is tested on Ubuntu Linux, Mac OS X, and Windows. + +##### Memory + +At least 1024 MB of RAM allocated to the application is required. We recommend using at least 2 GB for optimal performance. + +#### Software requirements + +When using the zip distribution, a Java runtime environment and a Solr instance are required. See [using Docker](/administration/docker/) to avoid these dependencies. + +##### Java Runtime Environment + +We recommend using [OpenJDK](https://openjdk.org/projects/jdk/). The following table lists the supported Java versions for each version of ArchivesSpace: + +| ArchivesSpace version | OpenJDK version | +| --------------------- | --------------- | +| ≤ v3.5.1 | 8 or 11 | +| v4.0.0 up to v4.1.1 | 11 or 17 | +| ≥ v4.2.0 | 17 or 21 | + +Although the JRuby version used in ArchivesSpace v4.2.0 is still compatible with Java 11, we highly recommend using Java 17 or 21, as those are the Java versions ArchivesSpace v4.2.0 has been tested with. You can still use Java 11 with v4.2.0, but the ArchivesSpace Program Team can only provide support for environments using Java versions we have tested ArchivesSpace with (17 or 21). + +Note that in the next major release we expect to drop support for Java 17 and only support Java 21 and 25. + +##### Solr + +Up to ArchivesSpace v3.1.1, the zip file distribution included an embedded Solr v4 instance, which is deprecated and no longer supported. Use the Docker images provided on the [ArchivesSpace Docker repository](https://hub.docker.com/orgs/archivesspace/repositories) and see also [using Docker](/administration/docker/) to avoid managing an external Solr instance. + +ArchivesSpace v3.2.0 or above requires an external Solr instance when running using the Zip distribution.
The table below summarizes the supported Solr versions for each ArchivesSpace version: + +| ArchivesSpace version | External Solr version | +| --------------------- | ------------------------- | +| ≤ v3.1.1 | no external Solr required | +| v3.2.0 up to v3.5.1 | 8 (8.11) | +| v4.0.0 up to v4.1.1 | 9 (9.4.1) | +| ≥ v4.2.0 | 9 (9.9.0) | + +Each ArchivesSpace version is tested for compatibility with the corresponding Solr version listed in the table above. Using the corresponding version of Solr is recommended, as that version is used during development and when running the ArchivesSpace automated tests. + +If you need to use ArchivesSpace with an older version of Solr, check the [release notes](https://github.com/archivesspace/archivesspace/releases) for any potential version compatibility issues. + +**Note: the ArchivesSpace Program Team can only provide support for Solr deployments +using the "officially" supported version with the standard configuration provided by +the application. Everything else will be treated as "best effort" community-led support.** + +See [Running with external Solr](/provisioning/solr) for more information on installing and upgrading Solr. + +##### Database + +While ArchivesSpace does include an embedded database, MySQL is required for production use. + +(While not officially supported by ArchivesSpace, some community members use MariaDB, so there is some community support, for version 10.4.10 only.) + +**The embedded database is for testing purposes only. You should use MySQL or MariaDB for any data intended for production, including data in a test instance that you intend to move over to a production instance.** + +All ArchivesSpace versions can run on MySQL version 5.x or 8.x. + +#### Install and run + +Download the distribution `.zip` for your version from [ArchivesSpace releases on GitHub](https://github.com/archivesspace/archivesspace/releases).
+ +Confirm a supported Java version is active on your PATH: + +```sh +java -version +``` + +Compare the output with [Java Runtime Environment](#java-runtime-environment). If needed, install a supported OpenJDK or point your environment at one (avoid using an unsupported newer Java as the default). + +Extract the `.zip`; it creates a directory named `archivesspace`. Before starting ArchivesSpace, finish provisioning: + +- [MySQL](/provisioning/mysql) +- JDBC driver: [Download MySQL Connector](/provisioning/mysql/#download-mysql-connector) +- External [Solr](/provisioning/solr) when your version requires it (ArchivesSpace v3.2.0 and later on the zip distribution; see [Solr](#solr)) + +**Do not proceed until MySQL and Solr (when required) are running.** + +Start ArchivesSpace from that directory. On Linux and macOS: + +```shell +cd /path/to/archivesspace +./archivesspace.sh +``` + +On Windows: + +```shell +cd \path\to\archivesspace +archivesspace.bat +``` + +This runs ArchivesSpace in the foreground (it stops when you close the terminal). By default, logs are written to `logs/archivesspace.out`. + +**Note:** On Windows, errors such as `unable to resolve type 'size_t'` or `no such file to load -- bundler` often mean the path to the `archivesspace` folder contains spaces. Use a path without spaces. + +##### Verify and sign in + +The first startup can take about a minute. Then confirm the services in a browser: + +- http://localhost:8089/ — backend +- http://localhost:8080/ — staff interface +- http://localhost:8081/ — public interface +- http://localhost:8082/ — OAI-PMH server +- http://localhost:8090/ — Solr admin console + +In the staff interface, sign in with the default administrator account: + +- Username: `admin` +- Password: `admin` + +Create a repository via **System** → **Manage repositories** (top right). From **System** you can manage users and other administration tasks. 
**Change the default `admin` password before production use.** diff --git a/src/content/docs/ja/administration/index.md b/src/content/docs/ja/administration/index.md new file mode 100644 index 0000000..91ff590 --- /dev/null +++ b/src/content/docs/ja/administration/index.md @@ -0,0 +1,13 @@ +--- +title: Administration basics +description: Index of the administration pages for the tech-docs website. +--- + +- [Getting started](./getting_started) +- [Running ArchivesSpace as a Unix daemon](./unix_daemon) +- [Running ArchivesSpace as a Windows service](./windows) +- [Backup and recovery](./backup) +- [Re-creating indexes](./indexes) +- [Resetting passwords](./passwords) +- [Upgrading](./upgrading) +- [Log rotation](./logrotate) diff --git a/src/content/docs/ja/administration/indexes.md b/src/content/docs/ja/administration/indexes.md new file mode 100644 index 0000000..aef049f --- /dev/null +++ b/src/content/docs/ja/administration/indexes.md @@ -0,0 +1,86 @@ +--- +title: Recreating indexes +description: Steps for performing soft reindexes and full reindexes of Solr, including internal and external Solr. +--- + +There are two strategies for reindexing ArchivesSpace: + +- soft reindex +- full reindex + +## Soft reindex + +A soft reindex updates the existing documents in Solr without directly +touching the actual index documents on the filesystem. This can be done +while the system is running and is suitable for most use cases. + +There are two common ways to perform a soft reindex: + +1. Delete indexer state files + +ArchivesSpace keeps track of what has been indexed by using the files +under `data/indexer_state` and `data/indexer_pui_state` (for the PUI). + +If these files are missing, the indexer assumes that nothing has been +indexed and reindexes everything. To force ArchivesSpace to reindex all +records, just delete the files in `/path/to/archivesspace/data/indexer_state` +and `/path/to/archivesspace/data/indexer_pui_state`. 
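That deletion can be sketched as follows (the install path is a placeholder; substitute your own):

```shell
# Remove all indexer state files; on its next run the indexer will
# treat everything as unindexed and rebuild the index documents
rm -f /path/to/archivesspace/data/indexer_state/*
rm -f /path/to/archivesspace/data/indexer_pui_state/*
```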
+ +You can also do this selectively by record type; for example, to reindex +accessions in repository 2, delete the file called `2_accession.dat`. + +2. Bump `system_mtime` values in the database + +If you update a record's `system_mtime`, it becomes eligible for reindexing. + +```sql +# reindex all resources +UPDATE resource SET system_mtime = NOW(); +# reindex resource 1 +UPDATE resource SET system_mtime = NOW() WHERE id = 1; +``` + +## Full reindex + +A full reindex is a complete rebuild of the index from the database. This +may be required if you are having indexer issues, in the case of index +corruption, or if called for by an upgrade owing to changes in ArchivesSpace's +Solr configuration. + +To perform a full reindex: + +### ArchivesSpace <= 3.1.0 (embedded Solr) + +- Shut down ArchivesSpace +- Delete these directories: + - `rm -rf /path/to/archivesspace/data/indexer_state/` + - `rm -rf /path/to/archivesspace/data/indexer_pui_state/` + - `rm -rf /path/to/archivesspace/data/solr_index/` +- Restart ArchivesSpace + +### ArchivesSpace > 3.1.0 (external Solr) + +For external Solr there is a plugin that can perform all of the re-indexing steps: [aspace-reindexer](https://github.com/lyrasis/aspace-reindexer) + +Manual steps: + +- Shut down ArchivesSpace +- Delete these directories: + - `rm -rf /path/to/archivesspace/data/indexer_state/` + - `rm -rf /path/to/archivesspace/data/indexer_pui_state/` +- Perform a delete-all Solr query: + - `curl -X POST -H 'Content-Type: application/json' --data-binary '{"delete":{"query":"*:*" }}' http://${solrUrl}:${solrPort}/solr/archivesspace/update?commit=true` + - Windows PowerShell: + ``` + Invoke-RestMethod -Uri "http://localhost:8983/solr/archivesspace/update?commit=true" ` + -Method Post ` + -ContentType "application/json" ` + -Body '{"delete":{"query":"*:*"}}' + ``` +- Restart ArchivesSpace + +--- + +You can watch the [Tips for indexing ArchivesSpace](https://www.youtube.com/watch?v=yFJ6yAaPa3A) YouTube video to see these steps
performed. + +--- diff --git a/src/content/docs/ja/administration/logrotate.md b/src/content/docs/ja/administration/logrotate.md new file mode 100644 index 0000000..d96ce90 --- /dev/null +++ b/src/content/docs/ja/administration/logrotate.md @@ -0,0 +1,28 @@ +--- +title: Log rotation +description: Details an example of how to set up log rotation, which helps keep the ArchivesSpace log file from growing excessively. +--- + +In order to prevent your ArchivesSpace log file from growing excessively, you can set up log rotation. How to set up log rotation is specific to your institution, but here is an example logrotate config file, placed in `/etc/logrotate.d/`, with an explanation of what it does: + +``` + /<install location>/archivesspace/logs/archivesspace.out { + daily + rotate 7 + compress + notifempty + missingok + copytruncate + } +``` + +This example configuration file: + +- rotates the logs daily +- keeps 7 days' worth of logs +- compresses the logs so they take up less space +- ignores empty logs +- does not report errors if the log file is missing +- creates a copy of the original log file for rotation before truncating the contents of the original file diff --git a/src/content/docs/ja/administration/passwords.md b/src/content/docs/ja/administration/passwords.md new file mode 100644 index 0000000..088336b --- /dev/null +++ b/src/content/docs/ja/administration/passwords.md @@ -0,0 +1,16 @@ +--- +title: Resetting passwords +description: How to run a script that resets a user's password within ArchivesSpace. +--- + +Under the `scripts` directory you will find a script that lets you +reset a user's password. You can invoke it as: + +``` +scripts/password-reset.sh theusername newpassword # or password-reset.bat under Windows +``` + +If you are running against MySQL, you can use this command to set a +password while the system is running. If you are running against the +demo database, you will need to shut down ArchivesSpace before running +this script.
diff --git a/src/content/docs/ja/administration/unix_daemon.md b/src/content/docs/ja/administration/unix_daemon.md new file mode 100644 index 0000000..ba8d9d3 --- /dev/null +++ b/src/content/docs/ja/administration/unix_daemon.md @@ -0,0 +1,60 @@ +--- +title: Running as a Unix daemon +description: Steps for running ArchivesSpace in the background as a daemon using the startup script, and additional info on configuring startup/init settings. +--- + +The `archivesspace.sh` startup script doubles as an init script. If +you run: + +``` +archivesspace.sh start +``` + +ArchivesSpace will run in the background as a daemon (logging to +`logs/archivesspace.out` by default, as before). You can shut it down with: + +``` +archivesspace.sh stop +``` + +You can even install it as a system-wide init script by creating a +symbolic link: + +``` +cd /etc/init.d +ln -s /path/to/your/archivesspace/archivesspace.sh archivesspace +``` + +Note: By default ArchivesSpace will overwrite the log file when restarted. You +can change that by modifying `archivesspace.sh` and changing the `$startup_cmd` +to include double greater than signs: + +``` +$startup_cmd &>> \"$ARCHIVESSPACE_LOGS\" & +``` + +Then use the appropriate tool for your distribution to set up the +run-level symbolic links (such as `chkconfig` for RedHat or +`update-rc.d` for Debian-based distributions). + +Note that you may want to edit archivesspace.sh to set the account +that the system runs under, JVM options, and so on. 
+ +For systems that use systemd, you may wish to use a systemd unit file for ArchivesSpace. + +Something similar to this should work: + +``` +[Unit] +Description=ArchivesSpace Application +After=syslog.target network.target +[Service] +Type=forking +ExecStart=/path/to/your/archivesspace/archivesspace.sh start +ExecStop=/path/to/your/archivesspace/archivesspace.sh stop +PIDFile=/path/to/your/archivesspace/archivesspace.pid +User=archivesspace +Group=archivesspace +[Install] +WantedBy=multi-user.target +``` diff --git a/src/content/docs/ja/administration/upgrading.md b/src/content/docs/ja/administration/upgrading.md new file mode 100644 index 0000000..9c5376d --- /dev/null +++ b/src/content/docs/ja/administration/upgrading.md @@ -0,0 +1,183 @@ +--- +title: Upgrading when using the zip distribution +description: Instructions on how to update ArchivesSpace. +--- + +If you have installed ArchivesSpace using the Docker Configuration Package, refer to [upgrading with Docker](/administration/docker/#upgrading). If you have installed ArchivesSpace using the zip distribution, read on! (In case you do not know what the difference is, see the [getting started page](/administration/getting_started/#two-ways-to-get-up-and-running)). + +You can upgrade most versions of ArchivesSpace to a later version using these general instructions. Typically you do not need to progress through other versions of ArchivesSpace to get to a later one, unless there are special considerations for a specific version. Special considerations for these versions are noted here and in release notes.
+ +- **[Special considerations when upgrading to v1.1.0](/administration/upgrading_1_1_0)** +- **[Special considerations when upgrading to v1.1.1](/administration/upgrading_1_1_1)** +- **[Special considerations when upgrading from v1.4.2 to 1.5.x (these considerations also apply when upgrading from 1.4.2 to any version through 2.0.1)](/administration/upgrading_1_5_0)** +- **[Special considerations when upgrading to 2.1.0](/administration/upgrading_2_1_0)** +- **[Changing to external Solr when upgrading to 3.2.0 or later versions](https://docs.archivesspace.org/provisioning/solr/).** + +## Create a backup of your ArchivesSpace instance + +You should make sure you have a working backup of your ArchivesSpace +installation before attempting an upgrade. Follow the steps +under the [Backup and recovery section](/administration/backup) to do this. + +## Unpack the new version + +It's a good idea to unpack a fresh copy of the version of +ArchivesSpace you are upgrading to. This will ensure that you are +running the latest versions of all files. In the examples below, +replace the lower case x with the version number you are updating to. For example, +1.5.2 or 1.5.3. + +For example, on Mac OS X or Linux: + +```shell +$ mkdir archivesspace-1.5.x +$ cd archivesspace-1.5.x +$ curl -LJO https://github.com/archivesspace/archivesspace/releases/download/v1.5.x/archivesspace-v1.5.x.zip +$ unzip -x archivesspace-v1.5.x.zip +``` + +( The curl step is optional and simply downloads the distribution from GitHub. You can also +download the zip file in your browser and copy it to the directory. ) + +On Windows, you can do the same by extracting ArchivesSpace into a new +folder you create in Windows Explorer. + +## Shut down your ArchivesSpace instance + +To ensure you get a consistent copy, you will need to shut down your +running ArchivesSpace instance now.
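For a standard zip installation this is the startup script's stop command (the path shown is an example):

```shell
$ /path/to/archivesspace/archivesspace.sh stop
```

On Windows, close the console window running `archivesspace.bat`, or stop the service if you have set ArchivesSpace up as a Windows service.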
+ +## Copy your configuration and data files + +You will need to bring across the following files and directories from +your original ArchivesSpace installation: + +- the `data` directory (see **Indexes note** below) +- the `config` directory (see **Configuration note** below) +- your `lib/mysql-connector*.jar` file (if using MySQL) +- any plugins and local modifications you have installed in your `plugins` directory + +For example, on Mac OS X or Linux: + +```shell +$ cd archivesspace-1.5.x/archivesspace +$ cp -a /path/to/archivesspace-1.4.2/archivesspace/data/* data/ +$ cp -a /path/to/archivesspace-1.4.2/archivesspace/config/* config/ +$ cp -a /path/to/archivesspace-1.4.2/archivesspace/lib/mysql-connector* lib/ +$ cp -a /path/to/archivesspace-1.4.2/archivesspace/plugins/local plugins/ +$ cp -a /path/to/archivesspace-1.4.2/archivesspace/plugins/wonderful_plugin plugins/ +``` + +Or on Windows: + +``` +$ cd archivesspace-1.5.x\archivesspace +$ xcopy \path\to\archivesspace-1.4.2\archivesspace\data\* data /i /k /h /s /e /o /x /y +$ xcopy \path\to\archivesspace-1.4.2\archivesspace\config\* config /i /k /h /s /e /o /x /y +$ xcopy \path\to\archivesspace-1.4.2\archivesspace\lib\mysql-connector* lib /i /k /h /s /e /o /x /y +$ xcopy \path\to\archivesspace-1.4.2\archivesspace\plugins\local plugins\local /i /k /h /s /e /o /x /y +$ xcopy \path\to\archivesspace-1.4.2\archivesspace\plugins\wonderful_plugin plugins\wonderful_plugin /i /k /h /s /e /o /x /y +``` + +Note that you may want to preserve the logs file (`logs/archivesspace.out` +by default) from your previous installation--just in case you need to +refer to it later. + +### Configuration note + +Sometimes a new release of ArchivesSpace will introduce new +configuration settings that weren't present in previous releases. +Before you replace the distribution `config/config.rb` with your +original version, it's a good idea to review the distribution version +to see if there are any new configuration settings of interest. 
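One quick way to spot new settings is to diff your old config against the newly shipped one, in the same way the locales comparison below works (paths are examples):

```shell
$ diff /path/to/archivesspace-1.4.2/archivesspace/config/config.rb config/config.rb
```

Lines present only in the distribution copy are candidates for new settings you may want to carry over or uncomment.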
+ +Upgrade notes will generally draw attention to any configuration +settings you need to set explicitly, but you never know when you'll +discover a new, exciting feature! Documentation might also refer to +uncommenting configuration options that won't be in your file if you +keep your older version. + +### Indexes note + +Sometimes a new release of ArchivesSpace will require a FULL reindex, +which means you do not want to copy over anything from your data directory +to your new release. The data directory contains the indexes created by Solr. +Check the release notes of the new version for any details about reindexing and +the [recreating indexes section](/administration/indexes/) for instructions on recreating indexes. + +## Transfer your locales data + +If you've made modifications to your locales file ( en.yml ) with customized +labels, titles, tooltips, etc., you'll need to transfer those to your new +locale file. + +A good way to do this is to use a Diff tool, like Notepad++, TextMate, or just +the Linux diff command: + +```shell +$ diff /path/to/archivesspace-1.4.2/locales/en.yml /path/to/archivesspace-1.5.x/archivesspace/locales/en.yml +$ diff /path/to/archivesspace-1.4.2/locales/enums/en.yml /path/to/archivesspace-v1.5.x/archivesspace/locales/enums/en.yml +``` + +This will show you the differences in your current locales files, as well as the +new additions in the new version locales files. Simply copy the values you wish +to keep from your old ArchivesSpace locales to your new ArchivesSpace locales +files. + +## Run the database migrations + +With everything copied, the final step is to run the database +migrations. This will apply any schema changes and data migrations +that need to happen as a part of the upgrade. To do this, use the +`setup-database` script for your platform.
For example, on Mac OS X +or Linux: + +```shell +$ cd archivesspace-1.5.x/archivesspace +$ scripts/setup-database.sh +``` + +Or on Windows: + +```shell +$ cd archivesspace-1.5.x\archivesspace +$ scripts\setup-database.bat +``` + +## Solr configuration updates + +If the release you are upgrading to includes updates in the solr schema or other configuration files (see the release notes) +and you're using external Solr (required beginning with version 3.2.0), you will need to update the solr schema and configuration files +accordingly, by [copying the solr configuration files](/provisioning/solr/#copy-the-config-files) from the release package to your external solr configuration. +See also the [Full instructions for using external Solr with ArchivesSpace](/provisioning/solr). + +## If you've deployed to Tomcat + +The steps to deploy to Tomcat are essentially the same as in the +[archivesspace_tomcat](https://github.com/archivesspace-labs/archivesspace_tomcat) project. + +But, prior to running your setup-tomcat script, you'll need to be sure to clean out +any libraries from the previous ASpace version from your Tomcat classpath. + + 1. Stop Tomcat + 2. Unpack your new version of ArchivesSpace + 3. Configure your MySQL database in the config.rb ( just like in the + install instructions ) + 4. Make sure all your other local configuration settings are in your + config.rb file ( check your Tomcat conf/config.rb file for your current + settings. ) + 5. Make sure your MySQL connector jar is in the lib directory + 6. Run your setup-database script to migrate your database. + 7. Delete all ASpace related jar libraries in your Tomcat's lib directory. These + will include the "gems" folder, as well as "common.jar" and some + [others](https://github.com/archivesspace/archivesspace/tree/master/common/lib). + This will make sure you're running the correct version of the dependent + libraries for your new ASpace version. + Just be sure not to delete any of the Apache Tomcat libraries. + 8. 
Run your setup-tomcat script ( just like in the install instructions ). + This will copy all the files over to Tomcat. + 9. Start Tomcat + +## That's it! + +You can now start your new ArchivesSpace version as normal. diff --git a/src/content/docs/ja/administration/upgrading_1_1_0.md b/src/content/docs/ja/administration/upgrading_1_1_0.md new file mode 100644 index 0000000..868b49f --- /dev/null +++ b/src/content/docs/ja/administration/upgrading_1_1_0.md @@ -0,0 +1,62 @@ +--- +title: Upgrading to 1.1.0 +description: Special considerations when upgrading from ArchivesSpace 1.0.9 or less to 1.1.0, including the option for an external Solr instance. +--- + +Additional upgrade considerations specific to this release. Refer to the [upgrade documentation](/administration/upgrading) for the standard instructions that apply in all cases. + +## External Solr + +--- + +In ArchivesSpace 1.0.9 the default ports configuration was: + +```ruby +AppConfig[:backend_url] = "http://localhost:8089" +AppConfig[:frontend_url] = "http://localhost:8080" +AppConfig[:solr_url] = "http://localhost:8090" +AppConfig[:public_url] = "http://localhost:8081" +``` + +With the introduction of the [optional external Solr instance](/provisioning/solr) functionality this has been updated to: + +```ruby +AppConfig[:backend_url] = "http://localhost:8089" +AppConfig[:frontend_url] = "http://localhost:8080" +AppConfig[:solr_url] = "http://localhost:8090" +AppConfig[:indexer_url] = "http://localhost:8091" # NEW TO 1.1.0 +AppConfig[:public_url] = "http://localhost:8081" +``` + +In most cases the default value for `indexer_url` will blend in seamlessly without you needing to take any action. However, if you modified the original values in your `config.rb` file you may need to update it. 
Examples: + +**You use a different port sequence** + +```ruby +AppConfig[:indexer_url] = "http://localhost:9091" +``` + +**You run multiple ArchivesSpace instances on a single host** + +Under this deployment scenario you would have changed port numbers for some (or all) instances in each `config.rb` file, so set the `indexer_url` for each instance as described above. + +**You include hostnames** + +```ruby +AppConfig[:indexer_url] = "http://yourhostname:8091" +``` + +## Clustering + +--- + +In a clustered configuration you may need to edit `instance_[server hostname].rb` files: + +```ruby +{ + ... + :indexer_url => "http://[localhost|yourhostname]:8091", +} +``` + +--- diff --git a/src/content/docs/ja/administration/upgrading_1_1_1.md b/src/content/docs/ja/administration/upgrading_1_1_1.md new file mode 100644 index 0000000..1df7953 --- /dev/null +++ b/src/content/docs/ja/administration/upgrading_1_1_1.md @@ -0,0 +1,58 @@ +--- +title: Upgrading to 1.1.1 +description: Instructions on how to resequence archival object and digital object components within the resource tree and details on a plugin to make PDFs available in the public interface. +--- + +Additional upgrade considerations specific to this release. Refer to the [upgrade documentation](/administration/upgrading) for the standard instructions that apply in all cases. + +## Resequencing of Archival Object & Digital Object Component trees + +--- + +There have been some scenarios in which archival objects and digital object components lose +some of the information used to order their hierarchy. This can result in issues when creating, +editing, or moving items in the tree, since there are database constraints to ensure uniqueness +of certain metadata elements. + +In order to ensure data integrity, there is now a method to resequence the trees. This will +not reorder or edit the elements, but simply rebuild all the technical metadata used to establish +the ordering.
+ +To run the resequencing process, edit the config/config.rb file to have this line: + +```ruby +AppConfig[:resequence_on_startup] = true +``` + +and restart ArchivesSpace. This will trigger a rebuilding process after the application has +started. It's advised to let this rebuild process run its course prior to editing records. +The duration depends on the size of your database and can range from seconds ( for databases with +few Archival and Digital Objects ) to hours ( for databases with hundreds of thousands of records ). +Check your log file to see how the process is going. When it has finished, you should see the application +return to normal operation, generally with only indexer updates being recorded in the log file. + +After you've started ArchivesSpace, be sure to change the config.rb file to set `:resequence_on_startup` +back to `false`, since you will not need to run this process on every restart. + +## Export PDFs in the Public Interface + +--- + +A common request has been to have a PDF version of the EAD exported in the public application. +This has been a bit problematic, since EAD export has a rather large resource hit on the +database, which is only increased by the added process of PDF creation. We are currently +redesigning part of the ArchivesSpace backend to make PDF creation more user-friendly by +establishing a queue system for exports. + +In the meantime, Mark Cooper at Lyrasis has made a [Public Metadata Formats plugin](https://github.com/archivesspace-deprecated/aspace-public-formats) +that exposes certain metadata formats and PDFs in the public UI. This plugin has been included +in this release, but you will need to configure it to specify which formats you would like +to have exposed. Please read the plugin documentation on how to configure this. + +PLEASE NOTE: +Exporting large EAD resources with this plugin will most likely cause some problems.
Long requests +will time out, since the server does not want to waste resources on long-running processes. +In addition, a large number of requests for PDFs can cause an increased load on the server. +Please be aware of these plugin issues and limitations before enabling it. + +--- diff --git a/src/content/docs/ja/administration/upgrading_1_5_0.md b/src/content/docs/ja/administration/upgrading_1_5_0.md new file mode 100644 index 0000000..fb5662a --- /dev/null +++ b/src/content/docs/ja/administration/upgrading_1_5_0.md @@ -0,0 +1,147 @@ +--- +title: Upgrading to 1.5.0 +description: Upgrade instructions for upgrading from ArchivesSpace 1.4.2 or lower to 1.5.0, including details on the newest container management feature. +--- + +Additional upgrade considerations specific to this release, which also apply to upgrading from 1.4.2 or lower to any version through 2.0.1. Refer to the [upgrade documentation](/administration/upgrading) for the standard instructions that apply in all cases. + +## General overview + +The upgrade process to the new data model in 1.5.0 requires considerable data transformation, and it is important for users to review this document to understand the implications and possible side-effects. + +A quick overview of the steps: + +1. Review this document and understand how the upgrade will impact your data, paying particular attention to the [Preparation section](#preparation). +2. [Backup your database](/administration/backup). +3. No, really, [backup your database](/administration/backup). +4. It is suggested that [users start with a new solr index](/administration/indexes). To do this, delete the data/solr_index/index directory and all files in the data/indexer_state directory. The embedded version of Solr has been upgraded, which should result in a much more compact index size. +5. Follow the standard [upgrading instructions](/administration/upgrading).
+ Important to note: The setup-database.sh|bat script will modify your database schema, but it will not move the data. If you are currently using the container management plugin you will need to remove it from the list of plugins in your config file prior to starting ArchivesSpace. +6. Start ArchivesSpace. When 1.5.0 starts for the first time, a conversion process will kick off and move the data into the new table structure. **During this time, the application will be unavailable until it completes**. Duration depends on the size of your data and server resources, ranging from a few minutes for very small databases to several hours for very large ones. +7. When the conversion is done, the web application will start and the indexer will rebuild your index. Performance might be slower while the indexer runs, depending on your server environment and available resources. +8. Review the [output of the conversion process](#conversion) following the instructions below. How long it takes for the report to load will depend on the number of entries included in it. + +## Preparing for and Converting to the New Container Management Functionality + +With version 1.5.0, ArchivesSpace is adopting a new data model that will enable more capable and efficient management of the containers in which you store your archival materials. To take advantage of this improved functionality: + +- Repositories already using ArchivesSpace as a production application will need to upgrade their ArchivesSpace applications to version 1.5.0. (This upgrade / conversion must be done to take advantage of any other new features / bug fixes in ArchivesSpace 1.5.0 or later versions.) +- Repositories not yet using ArchivesSpace in production but needing to migrate data from the Archivists’ Toolkit or Archon will need to migrate their data to version 1.4.2 of ArchivesSpace or earlier and then upgrade that version to version 1.5.0. (This can be done when your repository is ready to migrate to ArchivesSpace.)
+- Repositories not yet using ArchivesSpace in production and not needing to migrate data from the Archivists’ Toolkit or Archon can start using ArchivesSpace 1.5.0 without the need to upgrade. (People in this situation do not need to read any further.) + +Converting the container data model in version 1.4.2 and earlier versions of ArchivesSpace to the 1.5.0 version has some complexity and may not accommodate all the various ways in which container information has been recorded by diverse repositories. As a consequence, upgrading from a pre-1.5.0 version of ArchivesSpace requires planning for the upgrade, reviewing the results, and, possibly, remediating data either prior to or after the final conversion process. Because of all the variations in which container information can be recorded, it is impossible to know all the ways the data of repositories will be impacted. For this reason, **all repositories upgrading their ArchivesSpace to version 1.5.0 should do so with a backup of their production ArchivesSpace instance and in a test environment.** A conversion may only be undone by reverting back to the source database. + +## Frequently Asked Questions + +_How will my data be converted to the new model?_ + +When your installation is upgraded to 1.5.0, the conversion will happen as part of the upgrade process. + +_Can I continue to use the current model for containers and not convert to the new model?_ + +Because it is such a substantial improvement (see the [new features list](#new-features-in-150) below), the new model is required for all using ArchivesSpace 1.5.0 and higher. The only way to continue using the current model is to never upgrade beyond 1.4.2. + +_What if I’m already using the container management plugin made available to the community by Yale University?_ + +Conversion of data created using the Yale container management plugin, or a local adaptation of the plugin, will also happen as part of the process of upgrading to 1.5.0.
Some steps will be skipped when they are not needed. At the end of the process, the new container data model will be integrated into your ArchivesSpace and will not need to be loaded or maintained as a plugin. + +Those currently running the container management plugin will need to remove the container management plugin from the list in your config file prior to starting the conversion or a validation name error will occur. + +_I haven’t moved from Archivists’ Toolkit or Archon yet and am planning to use the associated migration tool. Can I migrate directly to 1.5.0?_ + +No, you must migrate to 1.4.2 or earlier versions and then upgrade your installation to 1.5.0 according to the instructions provided here. + +_What changes are being made to the previous model for containers?_ + +The biggest change is the new concept of top containers. A top container is the highest level container in which a particular instance is stored. Top containers are in some ways analogous to the current Container 1, but broken out from the entire container record (child and grandparent container records). As such, top containers enable more efficient recording and updating of the highest level containers in your collection. + +_How does ArchivesSpace determine what is a top container?_ + +During the conversion, ArchivesSpace will find all the Container 1s in your current ArchivesSpace database. It will then evaluate them as follows: + +- If containers have barcodes, one top container is created for each unique Container 1 barcode. +- If containers do not have barcodes, one top container is created for each unique combination of container 1 indicator and container type 1 within a resource or accession. +- Once a top container is created, additional instance records for the same container within an accession or resource will be linked to that top container record. 
+ +## Preparation + +_What can I do to prepare my ArchivesSpace data for a smoother conversion to top containers?_ + +- If your Container 1s have unique barcodes, you do not need to do anything except verify that your data is complete and accurate. You should run a preliminary conversion as described in the Conversion section and resolve any errors. +- If your Container 1s do not have barcodes, but have a nonduplicative container identifier sequence within each accession or resource (e.g. Box 1, Box 2, Box 3), or the identifiers are only reused within an accession or resource for different types of containers (for example, you have a Box 1 through 10 and an Oversize Box 1 through 3) you do not need to do anything except verify that your data is complete and accurate. You should run a preliminary conversion as described in the Conversion section and resolve any errors. +- If your Container 1s do not have barcodes and you have parallel numbering sequences, where the same indicators and types are used to refer to different containers within the same accession or resource within some or all accessions or resources (for example, you have a Box 1 in series 1 and a different Box 1 in series 5) you will need to find a way to uniquely identify these containers. One option is to run this [barcoder plugin](https://github.com/archivesspace-plugins/barcoder) for each resource to which this applies. The barcoder plugin creates barcodes that combine the ID of the highest level archival object ancestor with the container 1 type and indicator. (The barcoder plugin is designed to run against one resource at a time, instead of against all resources, because not all resources in a repository may match this condition.) Once you’ve differentiated your containers with parallel number sequences, you should run a preliminary conversion as described in the Conversion section and resolve any errors. + +You do not need to make any changes to Container 2 fields or Container 3 fields. 
Data in these fields will be converted to the new Child and Grandchild container fields that map directly to these fields. + +If you use the current Container Extent fields, these will no longer be available in 1.5.0. Any data in these fields will be migrated to a new Extent sub-record during the conversion. You can evaluate whether this data should remain in an extent record or if it belongs in a container profile or other fields and then move it accordingly after the conversion is complete. + +_I have EADs I still need to import into ArchivesSpace. How can I get them ready for this new model?_ + +If you have a box and folder associated with a component (or any other hierarchical relationship of containers), you will need to add identifiers to the container element so that the EAD importer knows which is the top container. If you previously used Archivists' Toolkit to create EAD, your containers probably already have container identifiers. If your container elements do not have identifiers already, Yale University has made available an [XSLT transformation file](https://github.com/YaleArchivesSpace/xslt-files/blob/master/EAD_add_IDs_to_containers.xsl) to add them. You will need to run it before importing the EAD file into ArchivesSpace. + +## Conversion + +When upgrading from 1.4.2 (and earlier versions) to 1.5.0, the container conversion will happen as part of the upgrade process. You will be able to follow its progress in the log. Instructions for upgrading from a previous version of ArchivesSpace are available at [upgrade documentation](/administration/upgrading). + +Because this is a major change in the data model for this portion of the application, running at least one test conversion is very strongly recommended. Follow these steps to run the upgrade/conversion process: + +- Create a backup of your ArchivesSpace instance to use for testing. 
**IT IS ESSENTIAL THAT YOU NOT RUN THIS ON A PRODUCTION INSTANCE AS THE CONVERSION CHANGES YOUR DATA, AND THE CHANGES CANNOT BE UNDONE EXCEPT BY REVERTING TO A BACKUP VERSION OF YOUR DATA PRIOR TO RUNNING THE CONVERSION.** +- Follow the upgrade instructions to unpack a fresh copy of the v 1.5.0 release made available for testing, copy your configuration and data files, and transfer your locales. +- **It is recommended that you delete your Solr index files to start with a fresh index.** We are upgrading the version of Solr that ships with the application, and the upgrade will require a total reindex of your ArchivesSpace data. To do this, delete the data/solr_index/index directory and the files in data/indexer_state. +- Follow the upgrade instructions to run the database migrations. As part of this step, your container data will be converted to the new data model. You can follow along in the log. Windows users can open the archivesspace.out file in a tool like Notepad++. Mac users can run `tail -f logs/archivesspace.out` to get a live update from the log. +- When the test conversion has been completed, the log will indicate "Completed: existing containers have been migrated to the new container model." + +![Image of Conversion Log](../../../../images/ConversionLog.png) + +- Open ArchivesSpace via your browser and log in, then retrieve the container conversion error report from the Background Jobs area: +- Select Background Jobs from the Settings menu. + +![Image of Background Jobs](../../../../images/BackgroundJobs.png) + +- The first item listed under Archived Jobs after completing the upgrade should be container_conversion_job. Click View. + +![Image of Background Jobs List](../../../../images/BackgroundJobsList.png) + +- Under Files, click File to download a CSV file with the errors and a brief explanation.
+ +![Image of Files](../../../../images/Files.png) + +![Image of Error Report](../../../../images/ErrorReport.png) + +- Go back to your source data and correct any errors that you can before doing another test conversion. +- When the error report shows no errors, or when you are satisfied with the remaining errors, your production instance is ready to be upgraded. +- When the final upgrade/conversion is complete, you can move ArchivesSpace version 1.5.0 into production. + +_What are some common errors or anomalies that will be flagged in the conversion?_ + +- A container with a barcode has different indicators or types in different records. +- A container with a particular type and indicator sometimes has a barcode and sometimes doesn’t. +- A container is missing a type or indicator. +- Container levels are skipped (for example, there is a Container 1 and a Container 3, but no Container 2). +- A container has multiple locations. + +The conversion process can resolve some of these errors for you by supplying or deleting values as it deems appropriate, but for the most control over the process you will most likely want to resolve such issues yourself in your ArchivesSpace database before converting to the new container model. + +_Are there any known conversion issues?_ + +Due to a change in the ArchivesSpace EAD importer in 2015, some EADs with hierarchical containers not designated by a @parent attribute were turned into multiple instance records. This has since been corrected in the application, but we are working on a plugin (now available at [Instance Joiner Plugin](https://github.com/archivesspace-plugins/instance_joiner)) that will enable you to turn these back into single instances so that subcontainers are not mistakenly turned into top containers.
+ +## New features in 1.5.0 + +**Top containers replace Container 1s.** Unlike Container 1s in the current version of ArchivesSpace, top containers in the upcoming version can be defined once and linked many times to various archival objects, resources, and accessions. + +**The ability to create container profiles and associate them with top containers.** Optional container profiles allow you to track information about the containers themselves, including dimensions. + +**Extent calculator.** In conjunction with container profiles, the new extent calculator allows you to easily see extents for accessions, resources, or resource components. Optionally, you can use the calculator to generate extent records for an accession, resource, or resource component. + +**Bulk operations for containers.** The Manage Top Containers area provides more efficient ways to work with multiple containers, including the ability to add or edit barcodes, change locations, and delete top containers in bulk. + +**The ability to "share" boxes across collections in a meaningful way.** You can define top containers separately from individual accessions and resources and access them from multiple accession and resource records. For example, this might be helpful for recording information about an oversize box that contains items from many collections. + +**The ability to store data that will help you synchronize between ArchivesSpace and item records in your ILS.** If your institution creates item records in its ILS for containers, you can now record that information within ArchivesSpace as well. + +**The ability to store data about the restriction status of material associated with a container.** You can now see at a glance whether any portion of the contents of a container is restricted. + +**Machine-actionable restrictions.** You will now have the ability to associate begin and end dates with "conditions governing access" and "conditions governing use" Notes. 
You'll also be able to associate a local restriction type for non-time-bound restrictions. This makes it possible to better manage and re-describe expiring restrictions.
+
+For more information on using the new features, consult the user manual, particularly the new section titled Managing Containers (available late April 2016).
diff --git a/src/content/docs/ja/administration/upgrading_2_1_0.md b/src/content/docs/ja/administration/upgrading_2_1_0.md
new file mode 100644
index 0000000..05b8e8e
--- /dev/null
+++ b/src/content/docs/ja/administration/upgrading_2_1_0.md
@@ -0,0 +1,30 @@
+---
+title: Upgrading to 2.1.0
+description: Instructions on upgrading to ArchivesSpace 2.1.0 if coming from 1.4.2 or below, Archivists' Toolkit or Archon, or if using an external Solr server, in addition to notes on rights statement data migration.
+---
+
+This page covers additional upgrade considerations specific to this release. Refer to the [upgrade documentation](/administration/upgrading) for the standard instructions that apply in all cases.
+
+:::note
+These considerations also apply when upgrading to any version past 2.1.0 from a version prior to 2.1.0.
+:::
+
+## For those upgrading from 1.4.2 and lower
+
+Following the merge of the Container Management Plugin in 1.5.0, ArchivesSpace still retained the old container model and had a number of dependencies on it. This imposed unnecessary complexity and some performance degradation on the system.
+
+In this release all references to the old container model have been removed and the parts of the application that were dependent on it (for example, Imports and Exports) have been refactored to use the new container model.
+
+A consequence of this change is that if you are upgrading from ArchivesSpace version 1.4.2 or lower, you will need to first upgrade to any version between 1.5.0 and 2.0.1 to run the container conversion. You will then be able to upgrade to 2.1.0.
If you are already using any version of ArchivesSpace between 1.5.0 and 2.0.1, you will be able to upgrade directly to 2.1.0.
+
+## For those needing to migrate data from Archivists' Toolkit or Archon using the migration tools
+
+The migration tools are currently supported through version 1.4.2 only. If you want to migrate data to ArchivesSpace using one of these tools, you must migrate it to 1.4.2. From there you can follow the instructions for those upgrading from 1.4.2 and lower.
+
+## Data migrations in this release
+
+The rights statements data model has changed in 2.1.0. If you currently use rights statements, your data will be converted to the new model during the setup-database step of the upgrade process. We strongly urge you to back up your database and run at least one test upgrade before putting 2.1.0 into production.
+
+## For those using an external Solr server
+
+The index schema has changed with 2.1.0. If you are using an external Solr server, you will need to update the [schema.xml](https://github.com/archivesspace/archivesspace/blob/master/solr/schema.xml) with the newer version. If you are using the default Solr index that ships with ArchivesSpace, no action is needed.
diff --git a/src/content/docs/ja/administration/windows.md b/src/content/docs/ja/administration/windows.md
new file mode 100644
index 0000000..a34b237
--- /dev/null
+++ b/src/content/docs/ja/administration/windows.md
@@ -0,0 +1,60 @@
+---
+title: Running as a Windows service
+description: Instructions on how to set up ArchivesSpace as a Windows service.
+---
+
+Running ArchivesSpace as a Windows service requires some additional configuration.
+
+You can use Apache [procrun](http://commons.apache.org/proper/commons-daemon/procrun.html) to configure ArchivesSpace to run as a Windows service. We have provided a service.bat script that will attempt to configure procrun for you (under `launcher\service.bat`).
+
+To run this script, first you need to [download procrun](http://www.apache.org/dist/commons/daemon/binaries/windows/).
+Extract the files and copy prunsrv.exe and prunmgr.exe to your ArchivesSpace directory.
+
+To find the path to Java, go to "Start" > "Control Panel" > "Java" and select the "Java" tab. You'll see the path there; it will look something like `C:\Program Files (x86)\Java`.
+
+You also need to be sure that Java is in your system path and to create `JAVA_HOME` as a global environment variable.
+To add Java to your path, edit your %PATH% environment variable to include the directory of your Java executable (it will be something like `C:\Program Files (x86)\Java`). To add `JAVA_HOME`, add a new system variable and set it to the directory where Java was installed (something like `C:\Program Files (x86)\Java`).
+
+To find environment variables, go to "Start" > "Control Panel" and search for "environment". Click "Edit the system environment variables". In the "System Variables" section, find the `PATH` environment variable and select it, then click Edit. If the `PATH` environment variable does not exist, click New. In the Edit System Variable (or New System Variable) window, specify the value of the `PATH` environment variable and click OK. Close all remaining windows by clicking OK. Do the same for `JAVA_HOME`.
+
+Before setting up the ArchivesSpace service, you should also [configure ArchivesSpace to run against MySQL](/provisioning/mysql).
+Be sure that the MySQL connector jar file is in the lib directory so that
+the service setup script can add it to the application's classpath.
+
+Lastly, for the service to shut down cleanly, uncomment and change these lines in
+config/config.rb:
+
+```ruby
+AppConfig[:use_jetty_shutdown_handler] = true
+AppConfig[:jetty_shutdown_path] = "/xkcd"
+```
+
+This enables a shutdown hook that Jetty responds to when the shutdown action
+is taken.
+ +You can now execute the batch script from your ArchivesSpace root directory from +the command line with `launcher\service.bat`. This will configure the service and +provide two executables: `ArchivesSpaceService.exe` (the service) and +`ArchivesSpaceServicew.exe` (a GUI monitor) + +There are several options to launch the service. The easiest is to open the GUI +monitor and click "Launch". + +Alternatively, you can start the GUI monitor and minimize it in your +system tray with: + +```shell +ArchivesSpaceServicew.exe //MS// +``` + +To execute the service from the command line, you can invoke: + +```shell +ArchivesSpaceService.exe //ES// +``` + +Log output will be placed in your ArchivesSpace log directory. + +Please see the [procrun +documentation](http://commons.apache.org/proper/commons-daemon/procrun.html) +for more information. diff --git a/src/content/docs/ja/api/index.md b/src/content/docs/ja/api/index.md new file mode 100644 index 0000000..3f79dc2 --- /dev/null +++ b/src/content/docs/ja/api/index.md @@ -0,0 +1,486 @@ +--- +title: Working with the API +description: General information about working with the API, including authentication, get, and post requests with examples. +--- + +:::tip +This documentation provides general information on working with the API. For detailed documentation of specific endpoints, see the [API reference](http://archivesspace.github.io/archivesspace/api/), which is maintained separately. +::: + +## Authentication + +Most actions against the backend require you to be logged in as a user +with the appropriate permissions. By sending a request like: + + POST /users/admin/login?password=login + +your authentication request will be validated, and a session token +will be returned in the JSON response for your request. To remain +authenticated, provide this token with subsequent requests in the +`X-ArchivesSpace-Session` header. 
For example:
+
+    X-ArchivesSpace-Session: 8e921ac9bbe9a4a947eee8a7c5fa8b4c81c51729935860c1adfed60a5e4202cb
+
+Since not all backend/API endpoints require authentication, it is best to restrict access to port 8089 to only IP addresses you trust. Your firewall should be used to specify a range of IP addresses that are allowed to call your ArchivesSpace API endpoint. This is commonly called whitelisting or allowlisting.
+
+### Example requests using curl
+
+Send a request to authenticate:
+
+```shell
+curl -s -F password="admin" "http://localhost:8089/users/admin/login"
+```
+
+This will return a JSON response that includes something like the following:
+
+<!-- prettier-ignore -->
+```json
+{
+  "session":"9528190655b979f00817a5d38f9daf07d1686fed99a1d53dd2c9ff2d852a0c6e",
+  ....
+}
+```
+
+It’s a good idea to save the session key as an environment variable to use for later requests:
+
+```shell
+#Mac/Unix terminal
+export SESSION="9528190655b979f00817a5d38f9daf07d1686fed99a1d53dd2c9ff2d852a0c6e"
+
+#Windows Command Prompt
+set SESSION="9528190655b979f00817a5d38f9daf07d1686fed99a1d53dd2c9ff2d852a0c6e"
+
+#Windows PowerShell
+$env:SESSION="9528190655b979f00817a5d38f9daf07d1686fed99a1d53dd2c9ff2d852a0c6e"
+```
+
+Now you can make requests like this:
+
+```shell
+curl -H "X-ArchivesSpace-Session: $SESSION" "http://localhost:8089/repositories/2/resources/1"
+```
+
+## CRUD
+
+The ArchivesSpace API provides CRUD-style interactions for a number of
+different "top-level" record types. Working with records follows a
+fairly standard pattern:
+
+    # Get a paginated list of accessions from repository '123'
+    GET /repositories/123/accessions?page=1
+
+    # Create a new accession, returning the ID of the new record
+    POST /repositories/123/accessions
+    {...
a JSON document satisfying JSONModel(:accession) here ...}
+
+    # Get a single accession (returned as a JSONModel(:accession) instance) using the ID returned by the previous request
+    GET /repositories/123/accessions/456
+
+    # Update an existing accession
+    POST /repositories/123/accessions/456
+    {... a JSON document satisfying JSONModel(:accession) here ...}
+
+## Performing API requests
+
+### GET requests
+
+#### Resolving associated records
+
+The :resolve parameter is a way to tell ArchivesSpace to attach the full object to record references ("refs") in the
+results; it is passed in as an array of keys to "prefetch" in the returned JSON. The object is included in the ref under a \_resolved key.
+
+For example, to find an archival object by a ref_id and return the found archival object, you can attach
+`resolve[]: "archival_objects"` within your request.
+
+##### Shell Example
+
+> ```shell
+> curl -s -F password="admin" "http://localhost:8089/users/admin/login"
+> # Replace "admin" with your password and "http://localhost:8089/users/admin/login" with your ASpace API URL
+> # followed by "/users/{your_username}/login"
+>
+> set SESSION="session_id"
+> # If using a unix-like shell, replace set with export
+>
+> curl -H "X-ArchivesSpace-Session: $SESSION" \
+> "http://localhost:8089/repositories/:repo_id:/find_by_id/archival_objects?ref_id[]=hello_im_a_ref_id;resolve[]=archival_objects"
+> # Replace "http://localhost:8089" with your ASpace API URL, :repo_id: with the repository ID,
+> # "hello_im_a_ref_id" with the ref ID you are searching for, and only add
+> # "resolve[]=archival_objects" if you want the JSON for the returned record - otherwise, it will return the
+> # record URI only
+> ```
+
+##### Python Example
+
+> ```python
+> from asnake.client import ASnakeClient # import the ArchivesSnake client
+>
+> client = ASnakeClient(baseurl="http://localhost:8089", username="admin", password="admin")
+> # Replace "http://localhost:8089" with your ArchivesSpace API URL and "admin" for your
username and password +> +> client.authorize() # authorizes the client +> +> find_ao_refid = client.get("repositories/:repo_id:/find_by_id/archival_objects", +> params={"ref_id[]": "hello_im_a_ref_id", +> "resolve[]": "archival_objects"}) +> # Replace :repo_id: with the repository ID, "hello_im_a_ref_id" with the ref ID you are searching for, and only add +> # "resolve[]": "archival_objects" if you want the JSON for the returned record - otherwise, it will return the +> # record URI only +> +> print(find_ao_refid.json()) +> # Output (dict): {'archival_objects': [{'ref': '/repositories/2/archival_objects/708425', '_resolved':...}]} +> ``` + +#### Requests for paginated results + +Endpoints that represent groups of objects, rather than single objects, tend to be paginated. Paginated endpoints are called out in the documentation as special, with some version of the following content appearing: +This endpoint is paginated. :page, :id_set, or :all_ids is required + + Integer page – The page set to be returned + Integer page_size – The size of the set to be returned ( Optional. 
default set in AppConfig )
+    Comma separated list id_set – A list of ids to request resolved objects ( Must be smaller than default page_size )
+    Boolean all_ids – Return a list of all object ids
+
+These endpoints support some or all of the following:
+
+    paged access to objects (via :page)
+    listing all matching ids (via :all_ids)
+    fetching specific known objects via their database ids (via :id_set)
+
+##### Shell Example
+
+> ```shell
+> curl -s -F password="admin" "http://localhost:8089/users/admin/login"
+> # Replace "admin" with your password and "http://localhost:8089/users/admin/login" with your ASpace API URL
+> # followed by "/users/{your_username}/login"
+>
+> set SESSION="session_id"
+> # If using a unix-like shell, replace set with export
+>
+> # For all archival objects, use all_ids
+> curl -H "X-ArchivesSpace-Session: $SESSION" \
+> "http://localhost:8089/repositories/2/archival_objects?all_ids=true"
+>
+> # For a set of archival objects, use id_set
+> curl -H "X-ArchivesSpace-Session: $SESSION" \
+> "http://localhost:8089/repositories/2/archival_objects?id_set=707458&id_set=707460&id_set=707461"
+>
+> # For a page of archival objects, use page and page_size
+> curl -H "X-ArchivesSpace-Session: $SESSION" \
+> "http://localhost:8089/repositories/2/archival_objects?page=1&page_size=10"
+> ```
+
+> Python example needed
+
+#### Working with long result sets
+
+When working with search results using page and page_size parameters, many results can be returned, and managing those
+results can be difficult. The Python example below demonstrates how to take a large, paginated result set and iterate
+through it.
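The `this_page` and `last_page` fields returned by paginated endpoints make this kind of iteration mechanical. Below is a minimal, server-free sketch of the loop, assuming only the paginated response shape documented above; `fetch_page` is a hypothetical stand-in for a real call such as `client.get(...).json()`:

```python
def iter_all_results(fetch_page, page_size=10):
    """Yield every record from a paginated endpoint, one page at a time.

    fetch_page(page, page_size) must return a dict shaped like ArchivesSpace's
    paginated responses: {'this_page': ..., 'last_page': ..., 'results': [...]}.
    """
    page = 1
    while True:
        response = fetch_page(page, page_size)
        yield from response["results"]  # hand back each record on this page
        if response["this_page"] >= response["last_page"]:
            break  # no more pages to fetch
        page += 1
```

With ArchivesSnake, `fetch_page` could be wired up as something like `lambda page, size: client.get("repositories/2/archival_objects", params={"page": page, "page_size": size}).json()` (hypothetical wiring; adjust the repository ID and endpoint to your installation).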
+
+##### Python Example
+
+> ```python
+> from asnake.client import ASnakeClient # import the ArchivesSnake client
+>
+> client = ASnakeClient(baseurl="http://localhost:8089", username="admin", password="admin")
+> # Replace http://localhost:8089 with your ArchivesSpace API URL and admin for your username and password
+>
+> client.authorize() # authorizes the client
+>
+> # To get a page of archival objects with a set page size, use "page" and "page_size" parameters
+> get_repo_aos_pages = client.get("repositories/2/archival_objects", params={"page": 1, "page_size": 10})
+> # Replace 2 with your repository ID. Find this in the URI of your archival object on the bottom right of the
+> # Basic Information section in the staff interface
+>
+> print(get_repo_aos_pages.json())
+> # Output (dictionary): {'first_page': 1, 'last_page': 26949, 'this_page': 1, 'total': 269488,
+> # 'results': [{'lock_version': 1, 'position': 0,...]...}
+>
+> result_count = get_repo_aos_pages.json()["total"] # total number of matching records
+> last_page = get_repo_aos_pages.json()["last_page"]
+> for page in range(1, last_page + 1):
+>     results = client.get("repositories/2/archival_objects",
+>                          params={"page": page, "page_size": 10}).json()["results"]
+>     for record in results:
+>         print(record["uri"]) # each record in "results" is a full JSONModel object
+> ```
+
+#### Search requests
+
+A number of routes in the ArchivesSpace API are designed to search for content across all or part of the records in the
+application. These routes make use of Solr, a component bundled with ArchivesSpace and used to provide full text search
+over records.
+
+The search routes present in the application as of this time are:
+
+- Search this archive
+- Search across repositories
+- Search this repository
+- Search across subjects
+- Search for top containers
+- Search across location profiles
+
+Search routes take quite a few different parameters, most of which correspond directly to Solr query parameters. The
+most important parameter to understand is q, which is the query sent to Solr. This query is made in Lucene query
+syntax.
The relevant docs are in the Solr Ref Guide's [The Standard Query Parser](https://solr.apache.org/guide/6_6/the-standard-query-parser.html#the-standard-query-parser) webpage. + +To limit a search to records of a particular type or set of types, you can use the 'type' parameter. This is only +relevant for search endpoints that aren't limited to specific types. Note that type is expected to be a list of types, +even if there is only one type you care about. + +##### Notes on search routes and results + +ArchivesSpace represents records as JSONModel Objects - this is what you get from and send to the system. + +SOLR takes these records, and stores "documents" BASED ON these JSONModel objects in a searchable index. + +Search routes query these documents, NOT the records themselves as stored in the database and represented by JSONModel. + +JSONModel objects and SOLR documents are similar in some ways: + +- Both SOLR documents and JSONModel Objects are expressed in JSON +- In general, documents will always contain some subset of the JSONModel object they represent + +But they also differ in quite a few important ways: + +- SOLR documents don't necessarily have all fields from a JSONModel object +- SOLR documents do not automatically contain nested JSONModel Objects +- SOLR documents can have fields defined that are arbitrary "search representations" of fields in associated records, + or combinations of fields in a record +- SOLR documents don't have a jsonmodel_type field - the jsonmodel_type of the record is stored as primary_type in SOLR + +How do I get the actual JSONModel from a search document? + +In ArchivesSpace, SOLR documents all have a field json, which contains the JSONModel Object the document represents as +a string. You can use a JSON library to parse this string from the field, for example the json library in Python. 
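Pulling the record back out of a search hit is then a one-liner; here is a sketch, where the document dict is a hand-made stand-in for one entry of a search response's results list:

```python
import json

# A stand-in search document, shaped like one entry in a search response's
# results list: the full record is serialized as a string in its "json" field.
doc = {
    "primary_type": "archival_object",
    "uri": "/repositories/2/archival_objects/708425",
    "json": '{"jsonmodel_type": "archival_object", "title": "Example folder"}',
}

record = json.loads(doc["json"])  # parse the serialized JSONModel object
print(record["title"])  # → Example folder
```

Note that `primary_type` on the document, not `jsonmodel_type`, tells you what kind of record the parsed object will be.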
+ +##### Shell Example + +> ```shell +> +> # auto-generated example +> curl -H "X-ArchivesSpace-Session: $SESSION" \ +> "http://localhost:8089/search/repositories?q=&aq=%7B%22jsonmodel_type%22%3D%3E%22advanced_query%22%2C+%22query%22%3D%3E%7B%22jsonmodel_type%22%3D%3E%22boolean_query%22%2C+%22op%22%3D%3E%22AND%22%2C+%22subqueries%22%3D%3E%5B%7B%22jsonmodel_type%22%3D%3E%22date_field_query%22%2C+%22negated%22%3D%3Efalse%2C+%22comparator%22%3D%3E%22empty%22%2C+%22field%22%3D%3E%22QSUC205%22%2C+%22value%22%3D%3E%222018-03-26%22%7D%5D%7D%7D&type%5B%5D=&sort=&facet%5B%5D=&facet_mincount=1&filter=%7B%22jsonmodel_type%22%3D%3E%22advanced_query%22%2C+%22query%22%3D%3E%7B%22jsonmodel_type%22%3D%3E%22boolean_query%22%2C+%22op%22%3D%3E%22AND%22%2C+%22subqueries%22%3D%3E%5B%7B%22jsonmodel_type%22%3D%3E%22date_field_query%22%2C+%22negated%22%3D%3Efalse%2C+%22comparator%22%3D%3E%22empty%22%2C+%22field%22%3D%3E%22QSUC205%22%2C+%22value%22%3D%3E%222018-03-26%22%7D%5D%7D%7D&filter_query%5B%5D=&exclude%5B%5D=&hl=BooleanParam&root_record=&dt=&fields%5B%5D=" +> +> # auto-generated example +> curl -H 'Content-Type: text/json' -H "X-ArchivesSpace-Session: $SESSION" \ +> "http://localhost:8089/search/repositories" \ +> -d '{ +> "aq": { +> "jsonmodel_type": "advanced_query", +> "query": { +> "jsonmodel_type": "boolean_query", +> "op": "AND", +> "subqueries": [ +> { +> "jsonmodel_type": "date_field_query", +> "negated": false, +> "comparator": "empty", +> "field": "QSUC205", +> "value": "2018-03-26" +> } +> ] +> } +> }, +> "facet_mincount": "1", +> "filter": { +> "jsonmodel_type": "advanced_query", +> "query": { +> "jsonmodel_type": "boolean_query", +> "op": "AND", +> "subqueries": [ +> { +> "jsonmodel_type": "date_field_query", +> "negated": false, +> "comparator": "empty", +> "field": "QSUC205", +> "value": "2018-03-26" +> } +> ] +> } +> }, +> "hl": "BooleanParam" +> }' +> ``` + +### POST requests + +#### Updating existing records + +For updating existing records, it's recommended to 
first do a GET request for the record you want to update. This
+ensures that the data you are updating is the most accurate and reduces the chance of inadvertently removing data that
+was there previously but would be lost if not included in the subsequent update. After getting the original
+record data, you can update it as needed and then do a POST request with the updated data. Make sure that the updated
+data is JSON formatted and is passed either through the `-d` or `--data` parameter, or the `json` parameter if using
+ArchivesSnake.
+
+##### Shell Example
+
+> ```shell
+> curl -s -F password="admin" "http://localhost:8089/users/admin/login"
+> # Replace "admin" with your password and "http://localhost:8089" with your ASpace API URL followed by
+> # "/users/{your_username}/login"
+>
+> set SESSION="session_id"
+> # If using a unix-like shell, replace set with export
+>
+> curl -H 'Content-Type: text/json' -H "X-ArchivesSpace-Session: $SESSION" \
+> "http://localhost:8089/repositories/:repo_id:/groups/:group_id:" \
+> -d '{"group_code": "test-group_managers",
+> "lock_version": 4,
+> "description": "Test group managers of the Manuscripts repository",
+> "jsonmodel_type": "group",
+> "member_usernames": [
+> "manager", "advance"]}'
+> # Replace http://localhost:8089 with your ArchivesSpace API URL, :repo_id: with the repository ID number,
+> # :group_id: with the group ID number you want to update, and the data found after -d with the data you want
+> # to update in the group. Be sure to include "lock_version" and the most recent number for it.
You can find the +> # most recent lock_version by submitting a get request, like so: curl -H "X-ArchivesSpace-Session: $SESSION" \ +> # "http://localhost:8089/repositories/:repo_id:/groups/:group_id:" +> +> # Output: +> # {"status":"Updated","id":23,"lock_version":5,"stale":null,"uri":"/repositories/2/groups/23","warnings":[]} +> ``` + +##### Python Example + +> ```python +> from asnake.client import ASnakeClient # import the ArchivesSnake client +> client = ASnakeClient(baseurl="http://localhost:8089", username="admin", password="admin") +> # replace http://localhost:8089 with your ArchivesSpace API URL and admin for your username and password +> +> client.authorize() # authorizes the client +> +> get_user_group = client.get("repositories/:repo_id:/groups/:group_id:").json() +> # Retrieve the data from the group you are trying to update. Replace :repo_id: with the repository ID number and +> # :group_id: with the group ID number you want to update +> +> get_user_group["member_usernames"].append("advance") +> # An example of how to modify a group record. For a list of all the fields you can update, +> # print(get_user_group). Here we append a new user 'advance' to the list of users associated with this group. +> +> update_user_group = get_user_group +> # Assign the newly updated get_user_group to update_user_group - to help make it clearer to see. +> +> update_status = client.post("repositories/:repo_id:/groups/:group_id:", json=update_user_group) +> # Replace :repo_id: with the repository ID number and :group_id: with the group ID number you want to update +> +> print(update_status.json()) +> # Output: +> # {'status': 'Updated', 'id': 48, 'lock_version': 1, 'stale': None, 'uri': '/repositories/2/groups/48', +> # 'warnings': []} +> ``` + +#### Creating new records + +When creating new records, it's recommended to do a GET request on the type of record you are wanting to create. 
This
+example record is useful for seeing what fields are included for that specific record. Not all fields are required; for
+example, the `created` and `modified` fields are not necessary when creating a new record, since those fields are
+handled automatically. Others, such as `title` and `jsonmodel_type`, are required.
+
+After examining an existing record for reference, craft your JSON-formatted data and make a POST request. Make sure
+that the new record is passed either through the `-d` or `--data` parameter, or the `json` parameter if using ArchivesSnake.
+
+##### Shell Example
+
+> ```shell
+> # Create a new user group within the SHELL
+> curl -s -F password="admin" "http://localhost:8089/users/admin/login"
+> # Replace "admin" with your password and "http://localhost:8089" with your ASpace API URL followed by
+> # "/users/{your_username}/login"
+>
+> set SESSION="session_id"
+> # If using a unix-like shell, replace set with export
+>
+> curl -H "X-ArchivesSpace-Session: $SESSION" "http://localhost:8089/repositories/:repo_id:/groups/" \
+> -d '{"group_code": "test-group_managers",
+> "description": "Test group managers of the Manuscripts repository",
+> "jsonmodel_type": "group"}'
+> # Replace "http://localhost:8089" with your ASpace API URL, :repo_id: with the repository ID, and
+> # the data found after -d with the metadata for the new user group you want to create.
+
+> # Output
+> # {"status":"Created","id":24,"lock_version":0,"stale":null,"uri":"/repositories/2/groups/24","warnings":[]}
+> ```
+
+##### Python Example
+
+> ```python
+> # Create a new user group using Python and ArchivesSnake
+> from asnake.client import ASnakeClient # import the ArchivesSnake client
+>
+> client = ASnakeClient(baseurl="http://localhost:8089", username="admin", password="admin")
+> # replace http://localhost:8089 with your ArchivesSpace API URL and admin for your username and password
+>
+> client.authorize() # authorizes the client
+>
+> new_group = {
+> "group_code": "test-group_managers",
+> "description": "Test group managers of the Manuscripts repository",
+> "jsonmodel_type": "group",
+> "member_usernames": [
+> "manager"
+> ],
+> "grants_permissions": [
+> "cancel_job",
+> "manage_enumeration_record"]
+> }
+> # This is a sample user group that exceeds the minimum requirements. The minimum requirements are:
+> # jsonmodel_type, description, and group_code. grants_permissions is optional; these values can be looked up in
+> # the ASpace database within the permissions table
+>
+> post_user_group = client.post("repositories/:repo_id:/groups", json=new_group)
+> # Replace :repo_id: with the ArchivesSpace repository ID and new_group with the json data to create a new user
+> # group
+>
+> print(post_user_group.json())
+> # Output:
+> # {'status': 'Created', 'id': 23, 'lock_version': 0, 'stale': None, 'uri': '/repositories/2/groups/23',
+> # 'warnings': []}
+> ```
+
+### DELETE requests
+
+Delete requests made via the API permanently delete records, just as within the staff interface. Be careful! Make
+sure you want to delete the entire record before doing so. If you want to delete parts of a record, for example some
+notes or other fields, see [Updating existing records](#updating-existing-records).
+
+To delete a record, retrieve the record's ArchivesSpace-generated ID and use the `DELETE` command for SHELL or
+`client.delete` if using the ArchivesSnake Python library.
+
+##### Shell Example
+
+> ```shell
+> # Delete a user group within the SHELL
+> curl -s -F password="admin" "http://localhost:8089/users/admin/login"
+> # Replace "admin" with your password and "http://localhost:8089" with your ASpace API URL followed by
+> # "/users/{your_username}/login"
+>
+> set SESSION="session_id"
+> # If using a unix-like shell, replace set with export
+>
+> curl -H "X-ArchivesSpace-Session: $SESSION" \
+> -X DELETE "http://localhost:8089/repositories/:repo_id:/groups/:group_id:"
+> # Replace "http://localhost:8089" with your ASpace API URL, :repo_id: with the repository ID, and
+> # :group_id: with the ID of the group you want to delete (usually found in the URL of the user group when
+> # viewing in the staff interface). Deleting is permanent so make sure to test this first!
+>
+> # Output: {"status":"Deleted","id":47}
+> ```
+
+##### Python Example
+
+> ```python
+> # Delete a user group from a repository using Python and ArchivesSnake
+> from asnake.client import ASnakeClient # import the ArchivesSnake client
+>
+> client = ASnakeClient(baseurl="http://localhost:8089", username="admin", password="admin")
+> # replace http://localhost:8089 with your ArchivesSpace API URL and admin for your username and password
+>
+> client.authorize() # authorizes the client
+>
+> delete_user_group = client.delete("repositories/:repo_id:/groups/:group_id:")
+> # Replace :repo_id: with the ArchivesSpace repository ID and :group_id: with the ArchivesSpace ID of the
+> # user group you want to delete. Deleting is permanent so make sure to test this first!
+> +> print(delete_user_group.json()) +> # Output: {'status': 'Deleted', 'id': 23} +> ``` diff --git a/src/content/docs/ja/architecture/api.md b/src/content/docs/ja/architecture/api.md new file mode 100644 index 0000000..474cf47 --- /dev/null +++ b/src/content/docs/ja/architecture/api.md @@ -0,0 +1,48 @@ +--- +title: API +description: Instructions for how to authenticate when trying to connect to a backend session, such as through the API, along with examples of common requests for getting and posting data. +--- + +:::note +See the [API section](/api/index) for more detailed documentation. +::: + +## Authentication + +Most actions against the backend require you to be logged in as a user +with the appropriate permissions. By sending a request like: + +``` +POST /users/admin/login?password=login +``` + +your authentication request will be validated, and a session token +will be returned in the JSON response for your request. To remain +authenticated, provide this token with subsequent requests in the +`X-ArchivesSpace-Session` header. For example: + +``` +X-ArchivesSpace-Session: 8e921ac9bbe9a4a947eee8a7c5fa8b4c81c51729935860c1adfed60a5e4202cb +``` + +## CRUD + +The ArchivesSpace API provides CRUD-style interactions for a number of +different "top-level" record types. Working with records follows a +fairly standard pattern: + +``` +# Get a paginated list of accessions from repository '123' +GET /repositories/123/accessions?page=1 + +# Create a new accession, returning the ID of the new record +POST /repositories/123/accessions +{... a JSON document satisfying JSONModel(:accession) here ...} + +# Get a single accession (returned as a JSONModel(:accession) instance) using the ID returned by the previous request +GET /repositories/123/accessions/456 + +# Update an existing accession +POST /repositories/123/accessions/456 +{... 
a JSON document satisfying JSONModel(:accession) here ...} +``` diff --git a/src/content/docs/ja/architecture/archivesspace_architecture.svg b/src/content/docs/ja/architecture/archivesspace_architecture.svg new file mode 100644 index 0000000..e7ded40 --- /dev/null +++ b/src/content/docs/ja/architecture/archivesspace_architecture.svg @@ -0,0 +1,105 @@ +<svg width="100%" viewBox="0 0 680 560" xmlns="http://www.w3.org/2000/svg"> +<defs> +<marker id="arrow" viewBox="0 0 10 10" refX="8" refY="5" markerWidth="6" markerHeight="6" orient="auto-start-reverse"> +<path d="M2 1L8 5L2 9" fill="none" stroke="context-stroke" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/> +</marker> +</defs> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="40" y="22" width="160" height="42" rx="8" stroke-width="0.5" style="fill:rgb(8, 80, 65);stroke:rgb(93, 202, 165);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="120" y="43" text-anchor="middle" dominant-baseline="central" style="fill:rgb(159, 225, 203);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Logged-in users</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe 
UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="265" y="22" width="150" height="42" rx="8" stroke-width="0.5" style="fill:rgb(68, 68, 65);stroke:rgb(180, 178, 169);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="340" y="43" text-anchor="middle" dominant-baseline="central" style="fill:rgb(211, 209, 199);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Internet</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="480" y="22" width="160" height="42" rx="8" stroke-width="0.5" style="fill:rgb(113, 43, 19);stroke:rgb(240, 153, 123);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="560" y="43" text-anchor="middle" dominant-baseline="central" style="fill:rgb(245, 196, 179);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Anonymous users</text> +</g> + +<line x1="200" y1="43" x2="265" y2="43" 
stroke="#0F6E56" stroke-width="1.5" fill="none" marker-end="url(#arrow)" style="fill:none;stroke:rgb(15, 110, 86);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<line x1="480" y1="43" x2="415" y2="43" stroke="#993C1D" stroke-width="1.5" fill="none" marker-end="url(#arrow)" style="fill:none;stroke:rgb(153, 60, 29);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +<path d="M310,64 C300,108 105,96 105,138" fill="none" stroke="#0F6E56" stroke-width="1.5" marker-end="url(#arrow)" style="fill:none;stroke:rgb(15, 110, 86);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<path d="M370,64 C380,108 547,96 547,138" fill="none" stroke="#993C1D" stroke-width="1.5" marker-end="url(#arrow)" style="fill:none;stroke:rgb(153, 60, 29);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +<rect x="15" y="115" width="650" height="145" rx="12" fill="none" stroke="var(--color-border-secondary)" stroke-width="0.5" stroke-dasharray="6 4" style="fill:none;stroke:rgba(222, 220, 209, 0.3);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-dasharray:6px, 4px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, 
BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="290" y="104" width="100" height="22" rx="11" stroke-width="0.5" style="fill:rgb(12, 68, 124);stroke:rgb(133, 183, 235);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="340" y="115" text-anchor="middle" dominant-baseline="central" style="fill:rgb(181, 212, 244);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Frontend</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="20" y="138" width="170" height="58" rx="8" stroke-width="0.5" style="fill:rgb(12, 68, 124);stroke:rgb(133, 183, 235);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="105" y="155" text-anchor="middle" dominant-baseline="central" style="fill:rgb(181, 212, 
244);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Staff UI</text> +<text x="105" y="173" text-anchor="middle" dominant-baseline="central" style="fill:rgb(133, 183, 235);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">JRuby · Rails · jQuery</text> +</g> +<line x1="36" y1="192" x2="174" y2="192" stroke="#0F6E56" stroke-width="2" stroke-linecap="round" style="fill:rgb(0, 0, 0);stroke:rgb(15, 110, 86);color:rgb(251, 251, 254);stroke-width:2px;stroke-linecap:round;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="248" y="138" width="170" height="58" rx="8" stroke-width="0.5" style="fill:rgb(12, 68, 124);stroke:rgb(133, 183, 235);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="333" y="158" text-anchor="middle" dominant-baseline="central" style="fill:rgb(181, 212, 244);stroke:none;color:rgb(251, 251, 
254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Background jobs</text> +<text x="333" y="176" text-anchor="middle" dominant-baseline="central" style="fill:rgb(133, 183, 235);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">JRuby · Ruby</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="462" y="138" width="170" height="58" rx="8" stroke-width="0.5" style="fill:rgb(12, 68, 124);stroke:rgb(133, 183, 235);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="547" y="155" text-anchor="middle" dominant-baseline="central" style="fill:rgb(181, 212, 244);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Public UI</text> +<text x="547" y="173" text-anchor="middle" dominant-baseline="central" style="fill:rgb(133, 183, 235);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, 
BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">JRuby · Rails · jQuery</text> +</g> +<line x1="478" y1="192" x2="616" y2="192" stroke="#993C1D" stroke-width="2" stroke-linecap="round" style="fill:rgb(0, 0, 0);stroke:rgb(153, 60, 29);color:rgb(251, 251, 254);stroke-width:2px;stroke-linecap:round;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +<line x1="190" y1="167" x2="248" y2="167" marker-end="url(#arrow)" style="fill:none;stroke:rgb(156, 154, 146);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +<path d="M105,196 C105,258 80,258 80,330" fill="none" marker-end="url(#arrow)" style="fill:none;stroke:rgb(156, 154, 146);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<path d="M333,196 C333,262 120,262 120,330" fill="none" marker-end="url(#arrow)" style="fill:none;stroke:rgb(156, 154, 146);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<path d="M547,196 C547,268 160,268 160,330" fill="none" marker-end="url(#arrow)" style="fill:none;stroke:rgb(156, 154, 146);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe 
UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +<rect x="15" y="310" width="650" height="115" rx="12" fill="none" stroke="var(--color-border-secondary)" stroke-width="0.5" stroke-dasharray="6 4" style="fill:none;stroke:rgba(222, 220, 209, 0.3);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-dasharray:6px, 4px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="290" y="299" width="100" height="22" rx="11" stroke-width="0.5" style="fill:rgb(8, 80, 65);stroke:rgb(93, 202, 165);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="340" y="310" text-anchor="middle" dominant-baseline="central" style="fill:rgb(159, 225, 203);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Backend</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="50" y="330" width="185" height="68" 
rx="8" stroke-width="0.5" style="fill:rgb(8, 80, 65);stroke:rgb(93, 202, 165);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="142" y="352" text-anchor="middle" dominant-baseline="central" style="fill:rgb(159, 225, 203);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">ArchivesSpace API</text> +<text x="142" y="369" text-anchor="middle" dominant-baseline="central" style="fill:rgb(93, 202, 165);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">JRuby · Sinatra</text> +<text x="142" y="385" text-anchor="middle" dominant-baseline="central" style="fill:rgb(93, 202, 165);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">JSONModel</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="435" y="330" width="195" height="68" rx="8" stroke-width="0.5" style="fill:rgb(8, 80, 65);stroke:rgb(93, 202, 165);color:rgb(251, 251, 
254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="532" y="352" text-anchor="middle" dominant-baseline="central" style="fill:rgb(159, 225, 203);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Indexer</text> +<text x="532" y="369" text-anchor="middle" dominant-baseline="central" style="fill:rgb(93, 202, 165);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">JRuby · Sinatra</text> +<text x="532" y="385" text-anchor="middle" dominant-baseline="central" style="fill:rgb(93, 202, 165);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">JSONModel</text> +</g> + +<text x="340" y="346" text-anchor="middle" style="fill:rgb(194, 192, 182);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:auto">monitors updates</text> +<line x1="435" y1="359" x2="235" y2="359" marker-end="url(#arrow)" style="fill:none;stroke:rgb(156, 154, 146);color:rgb(251, 251, 
254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +<rect x="15" y="450" width="650" height="95" rx="12" fill="none" stroke="var(--color-border-secondary)" stroke-width="0.5" stroke-dasharray="6 4" style="fill:none;stroke:rgba(222, 220, 209, 0.3);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-dasharray:6px, 4px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="290" y="439" width="100" height="22" rx="11" stroke-width="0.5" style="fill:rgb(99, 56, 6);stroke:rgb(239, 159, 39);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="340" y="450" text-anchor="middle" dominant-baseline="central" style="fill:rgb(250, 199, 117);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Storage</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, 
BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="50" y="462" width="185" height="58" rx="8" stroke-width="0.5" style="fill:rgb(99, 56, 6);stroke:rgb(239, 159, 39);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="142" y="482" text-anchor="middle" dominant-baseline="central" style="fill:rgb(250, 199, 117);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">MySQL</text> +<text x="142" y="500" text-anchor="middle" dominant-baseline="central" style="fill:rgb(239, 159, 39);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">Primary data store</text> +</g> + +<g style="fill:rgb(0, 0, 0);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"> +<rect x="435" y="462" width="195" height="58" rx="8" stroke-width="0.5" style="fill:rgb(99, 56, 6);stroke:rgb(239, 159, 39);color:rgb(251, 251, 254);stroke-width:0.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<text x="532" y="482" 
text-anchor="middle" dominant-baseline="central" style="fill:rgb(250, 199, 117);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:14px;font-weight:500;text-anchor:middle;dominant-baseline:central">Apache Solr</text> +<text x="532" y="500" text-anchor="middle" dominant-baseline="central" style="fill:rgb(239, 159, 39);stroke:none;color:rgb(251, 251, 254);stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:12px;font-weight:400;text-anchor:middle;dominant-baseline:central">Search index · Java</text> +</g> + +<line x1="142" y1="398" x2="142" y2="462" marker-end="url(#arrow)" style="fill:none;stroke:rgb(156, 154, 146);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> +<line x1="532" y1="398" x2="532" y2="462" marker-end="url(#arrow)" style="fill:none;stroke:rgb(156, 154, 146);color:rgb(251, 251, 254);stroke-width:1.5px;stroke-linecap:butt;stroke-linejoin:miter;opacity:1;font-family:"Anthropic Sans", -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif;font-size:16px;font-weight:400;text-anchor:start;dominant-baseline:auto"/> + +</svg> \ No newline at end of file diff --git a/src/content/docs/ja/architecture/backend.md b/src/content/docs/ja/architecture/backend.md new file mode 100644 index 0000000..e44a9ad --- /dev/null +++ b/src/content/docs/ja/architecture/backend.md @@ -0,0 +1,422 @@ +--- +title: Backend +description: Describes the architecture behind the backend of ArchivesSpace, including the main.rb and rest.rb files for initiating ArchivesSpace and defining API mechanisms, controllers, 
models, nested records, relationships, agents, validation, optimistic concurrency control, and the permissions model. +--- + +The backend is responsible for implementing the ArchivesSpace API, and +supports the sort of access patterns shown in the previous section. +We've seen that the backend must support CRUD operations against a +number of different record types, and those records are expressed as +JSON documents produced from instances of JSONModel classes. + +The following sections describe how the backend fits together. + +## main.rb -- load and initialize the system + +The `main.rb` program is responsible for starting the ArchivesSpace +system: loading all controllers and models, creating +users/groups/permissions as needed, and preparing the system to handle +requests. + +When the system starts up, the `main.rb` program performs the +following actions: + +- Initializes JSONModel--triggering it to load all record schemas + from the filesystem and generate the classes that represent each + record type. +- Connects to the database +- Loads all backend models--the system's domain objects and + persistence layer +- Loads all controllers--defining the system's REST endpoints +- Starts the job scheduler--handling scheduled tasks such as backups + of the demo database (if used) +- Runs the "bootstrap ACLs" process--creates the admin user and + group if they don't already exist; creates the hidden global + repository; creates system users and groups. +- Fires the "backend started" notification to any registered + observers. + +In addition to handling the system startup, `main.rb` also provides +the following facilities: + +- Session handling--tracks authenticated backend sessions using the + token extracted from the `X-ArchivesSpace-Session` request header. +- Helper methods for accessing the current user and current session + of each request. 
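The session-handling facility can be pictured with a small sketch. This is illustrative plain Ruby, not the actual ArchivesSpace implementation: the `SessionStore` class, its methods, and the `demo-token` value are invented for the example; only the `X-ArchivesSpace-Session` header name comes from the docs above.

```ruby
# Toy sketch of per-request session lookup (hypothetical names throughout).

SESSION_HEADER = 'X-ArchivesSpace-Session'

class SessionStore
  def initialize
    @sessions = {}
  end

  # Called when a login succeeds: remember which user owns the token
  def create(token, username)
    @sessions[token] = { user: username }
  end

  # Look up the session for a token; nil means the request is anonymous
  def find(token)
    @sessions[token]
  end
end

store = SessionStore.new
store.create('demo-token', 'admin')

# Simulate the headers of an incoming request
headers = { SESSION_HEADER => 'demo-token' }
session = store.find(headers[SESSION_HEADER])
puts session ? "authenticated as #{session[:user]}" : 'anonymous request'
```

A real deployment would persist sessions and expire stale tokens; the point here is just that every request is resolved to a user (or to "anonymous") from the token in that header.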
+ +## rest.rb -- Request and response handling for REST endpoints + +The `rest.rb` module provides the mechanism used to define the API's +REST endpoints. Each endpoint definition includes: + +- The URI and HTTP request method used to access the endpoint +- A list of typed parameters for that endpoint +- Documentation for the endpoint, each parameter, and each possible + response that may be returned +- Permission checks--predicates that the current user must satisfy + to be able to use the endpoint + +Each controller in the system consists of one or more of these +endpoint definitions. By using the endpoint syntax provided by +`rest.rb`, the controllers can declare the interface they provide, and +are freed from the boilerplate associated +with request handling--checking parameter types, coercing values from +strings into other types, and so on. + +The `main.rb` and `rest.rb` components work together to insulate the +controllers from much of the complexity of request handling. By the +time a request reaches the body of an endpoint: + +- It can be sure that all required parameters are present and of the + correct types. +- The body of the request has been fetched, parsed into the + appropriate type (usually a JSONModel instance--see below) and + made available as a request parameter. +- Any parameters provided by the client that weren't present in the + endpoint definition have been dropped. +- The user's session has been retrieved, and any defined access + control checks have been carried out. +- A connection to the database has been assigned to the request, and + a transaction has been opened. If the controller throws an + exception, the transaction will be automatically rolled back. + +## Controllers + +As touched upon in the previous section, controllers implement the +functionality of the ArchivesSpace API by registering one or more +endpoints. 
Each endpoint accepts an HTTP request for a given URI, +carries out the request and returns a JSON response (if successful) or +throws an exception (if something goes wrong). + +Each controller lives in its own file, and these can be found in the +`backend/app/controllers` directory. Since most of the request +handling logic is captured by the `rest.rb` module, controllers +generally don't do much more than coordinate the classes from the +model layer and send a response back to the client. + +### crud_helpers.rb -- capturing common CRUD controller actions + +Even though controllers are quite thin, there's still a lot of overlap +in their behaviour. Each record type in the system supports the same +set of CRUD operations, and from the controller's point of view +there's not much difference between an update request for an accession +and an update request for a digital object (for example). + +The `crud_helpers.rb` module pulls this commonality into a set of +helper methods that are invoked by each controller, providing methods +for the standard operations of the system. + +## Models + +The backend's model layer is where the action is. The model layer's +role is to bridge the gap between the high-level JSONModel objects +(complete with their properties, nested records, references to other +records, etc.) and the underlying relational database (via the Sequel +database toolkit). As such, the model layer is mainly concerned with +mapping JSONModel instances to database tables in a way that preserves +everything and allows them to be queried efficiently. + +Each record type has a corresponding model class, but the individual +model definitions are often quite sparse. This is because the +different record types differ in the following ways: + +- The set of properties they allow (and their types, valid values, + etc.) 
+- The types of nested records they may contain +- The types of relationships they may have with other record types + +The first of these--the set of allowable properties--is already +captured by the JSONModel schema definitions, so the model layer +doesn't have to enforce these restrictions. Each model can simply +take the values supplied by the JSONModel object it is passed and +assume that everything that needs to be there is there, and that +validation has already happened. + +The remaining two aspects _are_ enforced by the model layer, but +generally don't pertain to just a single record type. For example, an +accession may be linked to zero or more subjects, but so can several +other record types, so it doesn't make sense for the `Accession` model +to contain the logic for handling subjects. + +In practice we tend to see very little functionality that belongs +exclusively to a single record type, and as a result there's not much +to put in each corresponding model. Instead, models are generally +constructed by combining a number of mix-ins (Ruby modules) to satisfy +the requirements of the given record type. Features à la carte! + +### ASModel and other mix-ins + +At a minimum, every model includes the `ASModel` mix-in, which provides +base versions of the following methods: + +- `Model.create_from_json` -- Take a JSONModel instance and create a + model instance (a subclass of Sequel::Model) from it. Returns the + instance. +- `model.update_from_json` -- Update the target model instance with + the values from a given JSONModel instance. +- `Model.sequel_to_json` -- Return a JSONModel instance of the appropriate + type whose values are taken from the target model instance. + Model classes are declared to correspond to a particular JSONModel + instance when created, so this method can automatically return a + JSONModel instance of the appropriate type. 
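The three base methods above can be sketched with plain Ruby modules layered over one another. This is a deliberately toy illustration, not the real ASModel code: the `BaseModel`, `Notes`, and `AccessionModel` names are invented, and a hash stands in for the database record.

```ruby
# Toy sketch of layering behaviour over a base method via `super`
# (hypothetical names; not the actual ArchivesSpace classes).

module BaseModel
  # Base version: produce a minimal JSON-ish hash for a record
  def sequel_to_json(record)
    { id: record[:id] }
  end
end

module Notes
  # Override: delegate to the next module in the chain, then add notes
  def sequel_to_json(record)
    json = super
    json[:notes] = record.fetch(:notes, [])
    json
  end
end

class AccessionModel
  include BaseModel
  include Notes   # included last, so its override runs first
end

json = AccessionModel.new.sequel_to_json(id: 1, notes: ['a note'])
puts json.inspect
```

Because Ruby's method-lookup chain places later `include`s first, `Notes#sequel_to_json` runs, calls `super` to reach `BaseModel`, and then decorates the result; this is the same mechanism the mix-ins described below rely on.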
+ +These methods comprise the primary interface of the model layer: +virtually every mix-in in the model layer overrides one or more of +these to add behaviour in a modular way. + +For example, the 'notes' mix-in adds support for multiple notes to be +added to a record type--by mixing this module into a model class, that +class will automatically accept a JSONModel property called 'notes' +that will be stored in and retrieved from the database as needed. +This works by overriding the three methods as follows: + +- `Model.create_from_json` -- Call 'super' to delegate the creation to + the next mix-in in the chain. When it returns the newly created + object, extract the notes from the JSONModel instance and attach + them to the model instance (saving them in the database). +- `model.update_from_json` -- Call 'super' to save the other updates + to the database, then replace any existing notes entries for the + record with the ones provided by the JSONModel. +- `Model.sequel_to_json` -- Call 'super' to have the next mix-in in + the chain create a JSONModel instance, then pull the stored notes + from the database and poke them into it. + +All of the mix-ins follow this pattern: call 'super' to delegate the +call to the next mix-in in the chain (eventually reaching ASModel), +then manipulate the result to implement the desired behaviour. + +### Nested records + +Some record types, like accessions, digital objects, and subjects, are +_top-level records_, in the sense that they are created independently +of any other record and are addressable via their own URI. However, +there are a number of records that can't exist in isolation, and only +exist in the context of another record. When one record can contain +instances of another record, we call them _nested records_. + +To give an example, the `date` record type is nested within an +`accession` record (among others). 
When the model layer is asked to +save a JSONModel instance containing nested records, it must pluck out +those records, save them in the appropriate database table, and ensure +that linkages are created within the database to allow them to be +retrieved later. + +This happens often enough that it would be tedious to write code for +each model to handle its nested records, so the ASModel mix-in +provides a declaration to handle this automatically. For example, the +`accession` model uses a definition like: + +```ruby +base.def_nested_record(:the_property => :dates, + :contains_records_of_type => :date, + :corresponding_to_association => :date) +``` + +When creating an accession, this declaration instructs the `Accession` +model to create a database record for each date listed in the "dates" +property of the incoming record. Each of these date records will be +automatically linked to the created accession. + +### Relationships + +A relationship is a link between two top-level records, where the link +is a separate, dynamically generated, model with zero or more +properties of its own. + +For example, the `Event` model can be related to several different +types of records: + +```ruby +define_relationship(:name => :event_link, + :json_property => 'linked_records', + :contains_references_to_types => proc {[Accession, Resource, ArchivalObject]}) +``` + +This declaration generates a custom class that models the relationship +between events and the other record types. 
The corresponding JSON +schema declaration for the `linked_records` property looks like this: + +```ruby +"linked_records" => { + "type" => "array", + "ifmissing" => "error", + "minItems" => 1, + "items" => { + "type" => "object", + "subtype" => "ref", + "properties" => { + "role" => { + "type" => "string", + "dynamic_enum" => "linked_event_archival_record_roles", + "ifmissing" => "error", + }, + "ref" => { + "type" => [{"type" => "JSONModel(:accession) uri"}, + {"type" => "JSONModel(:resource) uri"}, + {"type" => "JSONModel(:archival_object) uri"}, + ...], + "ifmissing" => "error" + }, + ... +``` + +That is, the property includes URI references to other records, plus +an additional "role" property to indicate the nature of the +relationship. The corresponding JSON might then be: + +```ruby +linked_records: [{ref: '/repositories/123/accessions/456', role: 'authorizer'}, ...] +``` + +The `define_relationship` definition automatically makes use of the +appropriate join tables in the database to store this relationship and +retrieve it later as needed. + +### Agents and `agent_manager.rb` + +Agents present a bit of a representational challenge. There are four +types of agents (person, family, corporate entity, software), and at a +high-level they are structured in the same way: each type can contain +one or more name records, zero or more contact records, and a number +of properties. Records that link to agents (via a relationship, for +example) can link to any of the four types so, in some sense, each +agent type implements a common `Agent` interface. + +However, the agent types differ in their details. Agents contain name +records, but the types of those name records correspond to the type of +the agent: a person agent contains a person name record, for example. +So, in spite of their similarities, the different agents need to be +modelled as separate record types. + +The `agent_manager` module captures the high-level similarities +between agents. 
Each agent model includes the agent manager mix-in:
+
+```ruby
+include AgentManager::Mixin
+```
+
+and then defines itself declaratively via the provided class method:
+
+```ruby
+register_agent_type(:jsonmodel => :agent_person,
+                    :name_type => :name_person,
+                    :name_model => NamePerson)
+```
+
+This definition sets up the properties of that agent. It creates:
+
+- a one_to_many relationship with the corresponding name
+  type of the agent.
+- a one_to_many relationship with the agent_contact table.
+- a nested record definition for the names list of the agent
+  (so the list of names for the agent is automatically stored in
+  and retrieved from the database)
+- a nested record definition for the contact list of the agent.
+
+## Validations
+
+As records are added to and updated within the ArchivesSpace system,
+they are validated against a number of rules to make sure they are
+well-formed and don't conflict with other records. There are two
+types of record validation:
+
+- Record-level validations check that a record is self-consistent:
+  that it contains all required fields, that its values are of the
+  appropriate type and format, and that its fields don't contradict
+  one another.
+- System-level validations check that a record makes sense in a
+  broader context: that it doesn't share a unique identifier with
+  another record, and that any record it references actually exists.
+
+Record-level validations can be performed in isolation, while
+system-level validations require comparing the record to others in the
+database.
+
+System-level validations need to be implemented in the database itself
+(as integrity constraints), but record-level validations are often too
+complex to be expressed this way. As a result, validations in
+ArchivesSpace can appear in one or both of the following layers:
+
+- At the JSONModel level, validations are captured by JSON schema
+  documents.
Where more flexibility is needed, custom validations
+  are added to the `common/validations.rb` file, allowing validation
+  logic to be expressed using arbitrary Ruby code.
+- At the database level, validations are captured using database
+  constraints. Since the error messages yielded by these
+  constraints generally aren't useful for users, database
+  constraints are also replicated in the backend's model layer using
+  Sequel validations, which give more targeted error messages.
+
+As a general rule, record-level validations are handled by the
+JSONModel validations (either through the JSON schema or custom
+validations), while system-level validations are handled by the model
+and the database schema.
+
+## Optimistic concurrency control
+
+Updating a record using the ArchivesSpace API is a two-part process:
+
+```ruby
+# Perform a `GET` against the desired record to fetch its JSON
+# representation:
+
+GET /repositories/5/accessions/2
+
+# Manipulate the JSON representation as required, and then `POST`
+# it back to replace the original:
+
+POST /repositories/5/accessions/2
+```
+
+If two people do this simultaneously, there's a risk that one person
+could silently overwrite the changes made by the other. To prevent
+this, every record is marked with a version number that it carries in
+the `lock_version` property. When the system receives the updated
+copy of a record, it checks that the version it carries is still
+current; if the version number doesn't match the one stored in the
+database, the update request is rejected and the user must re-fetch
+the latest version before applying their update.
+
+## The ArchivesSpace permissions model
+
+The ArchivesSpace backend enforces access control, defining which
+users are allowed to create, read, update, suppress and delete the
+records in the system. The major actors in the permissions model are:
+
+- Repositories -- The main mechanism for partitioning the
+  ArchivesSpace system.
For example, an instance might contain one
+  repository for each section of an organisation, or one repository
+  for each major collection.
+- Users -- An entity that uses the system--often a person, but
+  perhaps a consumer of the ArchivesSpace API. The set of users is
+  global to the system, and a single user may have access to
+  multiple repositories.
+- Records -- A unit of information in the system. Some records are
+  global (existing outside of any given repository), while some are
+  repository-scoped (belonging to a single repository).
+- Groups -- A set of users _within_ a repository. Each group is
+  assigned zero or more permissions, which it confers upon its
+  members.
+- Permissions -- An action that a user can perform. For example, a
+  user with the `update_accession_record` permission is allowed to
+  update accessions for a repository.
+
+To summarize, a user can perform an action within a repository if they
+are a member of a group that has been assigned permission to perform
+that action.
+
+### Conceptual trickery
+
+Since they're repository-scoped, groups govern access to repositories.
+However, there are several record types that exist at the top level of
+the system (such as the repositories themselves, subjects, and agents),
+and the permissions model must be able to accommodate these.
+
+To get around this, we invent a concept: the "global" repository
+conceptually contains the whole ArchivesSpace universe. As with other
+repositories, the global repository contains groups, and users can be
+made members of these groups to grant them permissions across the
+entire system. One example of this is the "admin" user, which is
+granted all permissions by the "administrators" group of the global
+repository; another is the "search indexer" user, which can read (but
+not update or delete) any record in the system.
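The membership rule above--including the global-repository escape hatch--can be sketched with simple Ruby data structures. Everything here is hypothetical (the real checks live in the backend's permissions code); it only illustrates the lookup logic:

```ruby
GLOBAL_REPO = :global

# Hypothetical groups: each belongs to a repository, has members,
# and confers a set of permissions on those members.
GROUPS = [
  { repo: GLOBAL_REPO, members: ['admin'],
    permissions: [:all] },
  { repo: 2, members: ['archivist'],
    permissions: [:update_accession_record] }
]

# A user can perform an action within a repository if some group they
# belong to -- in that repository or in the global one -- carries the
# matching permission.
def can?(user, permission, repo)
  GROUPS.any? do |g|
    [repo, GLOBAL_REPO].include?(g[:repo]) &&
      g[:members].include?(user) &&
      (g[:permissions] & [permission, :all]).any?
  end
end

can?('archivist', :update_accession_record, 2) # => true
can?('archivist', :update_accession_record, 3) # => false
can?('admin', :update_accession_record, 3)     # => true (global group)
```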
diff --git a/src/content/docs/ja/architecture/database.md b/src/content/docs/ja/architecture/database.md new file mode 100644 index 0000000..37609e0 --- /dev/null +++ b/src/content/docs/ja/architecture/database.md @@ -0,0 +1,554 @@ +--- +title: Database +description: Describes the structure of the ArchivesSpace database, including a breakdown between the main, supporting, subrecord, relationship, enumerations, user-setting-permissions, job, and system tables. It also breaks down the specific fields present in the different tables. +--- + +The ArchivesSpace database stores all data that is created within an ArchivesSpace instance. As described in other sections of this documentation, the backend code - particularly the model layer and `ASModel_crud.rb` file - uses the `Sequel` database toolkit to bridge the gap between this underlying data and the JSON objects which are exchanged by the other components of the system. + +Often, querying the database directly is the most efficient and powerful way to retrieve data from ArchivesSpace. It is also possible to use raw SQL queries to create custom reports that can be run by users in the staff interface. Please consult the [Custom Reports](/customization/reports) section of this documentation for additional information on creating custom reports. + +<!-- .See this [plugin](link-to-plugin) for an example. Also --> + +It is recommended that ArchivesSpace be run against MySQL in production, not the included demo database. Instructions on setting up ArchivesSpace to run against MySQL are [here](/provisioning/mysql). + +The examples in this section are written for MySQL. There are many freely-available tutorials on the internet which can provide guidance to those unfamiliar with MySQL query syntax and the features of the language. + +**NOTE**: the documentation below is current through database schema version 129, application version 2.7.1. 
+
+## Database Overview
+
+The ArchivesSpace database schema and its mapping to the JSONModel objects used by the other parts of the system are defined by the files in the `common/schemas` and `backend/models` directories. The database itself is created via the `setup-database` script in the `scripts` directory. This script runs the migrations in the `common/db/migrations` directory.
+
+The tables in the ArchivesSpace database can be grouped into several general categories:
+
+- [Database Overview](#database-overview)
+- [Main record tables](#main-record-tables)
+- [Supporting record tables](#supporting-record-tables)
+- [Subrecord tables](#subrecord-tables)
+- [Relationship tables](#relationship-tables)
+- [Enumerations](#enumerations)
+- [User, setting, and permission tables](#user-setting-and-permission-tables)
+- [Job tables](#job-tables)
+- [System tables](#system-tables)
+- [Parent-Child Relationships and Sequencing](#parent-child-relationships-and-sequencing)
+  - [Repository-scoped records](#repository-scoped-records)
+  - [Parent/child relationships](#parentchild-relationships)
+  - [Sequencing](#sequencing)
+- [Boolean fields](#boolean-fields)
+- [Read-Only Fields](#read-only-fields)
+
+One way to get a view of all tables and columns in your ArchivesSpace instance is to run the following query in a MySQL client:
+
+```sql
+SELECT TABLE_SCHEMA
+     , TABLE_NAME
+     , COLUMN_NAME
+     , ORDINAL_POSITION
+     , IS_NULLABLE
+     , COLUMN_TYPE
+     , COLUMN_KEY
+FROM INFORMATION_SCHEMA.COLUMNS
+#change the following value to whatever your database is named
+WHERE TABLE_SCHEMA LIKE 'archivesspace'
+```
+
+Additionally, a BETA version of an [ArchivesSpace data dictionary](https://github.com/archivesspace/data-dictionary-initial) has been created by members of the ArchivesSpace development team and the ArchivesSpace User Advisory Council Reports team.
+
+## Main record tables
+
+These tables hold data about the primary record types in ArchivesSpace.
Main record types are distinguished from subrecords in that they have their own persistent URIs - corresponding to their database identifiers/primary keys - that are resolvable via the staff interface, public interface, and API. They are distinguished from supporting records in that they are the primary descriptive record types that users will interact with in the system. + +All of these records, except archival objects, can be created independently of any other record. Archival object records represent components of a larger entity, and so they must have a resource record as a root parent. See the [parent/child relationships](#parent-child-relationships-and-sequencing) section for more information about the representation of hierarchical relationships in the database. + +A few common fields occur in several main record tables. These similar fields are defined by the parent schemas in the `common/schemas` directory: + +| Column Name | Tables | +| ----------------------------------------------- | ---------------------------------------------------------------------------------------- | +| `title` | `accession`, `archival_object`, `digital_object`, `digital_object_component`, `resource` | +| `identifier`/`component_id`/`digital_object_id` | `accession`, `resource`/`archival_object`, `digital_object_component`/`digital_object` | +| `other_level` | `archival_object`, `resource` | +| `repository_processing_note` | `archival_object`, `resource` | + +<!-- Booleans --> + +All of the main records have a set of fields which store boolean values (`0` or `1`) that indicate whether the records are published in the public user interface, suppressed in the staff interface, or have some kind of applicable restriction. The exception to this is the `repository` table, which does not have a restriction boolean, but does have a `hidden` boolean. The `accession` table has multiple restriction-related booleans. See the section below for more information about boolean fields. 
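For example, the publication and suppression flags on accessions can be inspected directly (a MySQL sketch; it assumes the `publish` and `suppressed` boolean columns described above and a repository with id 2):

```sql
SELECT publish
     , suppressed
     , COUNT(*) AS record_count
FROM accession
#change to your repository id
WHERE repo_id = 2
GROUP BY publish, suppressed
```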
+
+Beginning in version 2.6.0, the main record tables (and some supporting records - see below) also contain fields which hold data about archival resource keys (ARKs) and human-readable URLs (slugs):
+
+| Column Name        | Tables                                                                                                 |
+| ------------------ | ------------------------------------------------------------------------------------------------------ |
+| `slug`             | `accession`, `archival_object`, `digital_object`, `digital_object_component`, `repository`, `resource` |
+| `external_ark_url` | `archival_object`, `resource`                                                                          |
+
+These and all other tables also store enumeration values: foreign keys which correspond to database identifiers in the `enumeration_value` table, where controlled values are stored. See the [enumeration section](#enumerations) below for more detail.
+
+All subrecord data types - e.g. dates, extents, instances - relating to a main or supporting record are stored in their own tables and linked to main or supporting records via foreign key references in the subrecord tables. See the [subrecord section](#subrecord-tables) below for more detail.
+ +The remaining data in the main record tables is text, and is unique to each table: + +| TABLE_NAME | COLUMN_NAME | IS_NULLABLE | COLUMN_TYPE | COLUMN_KEY | +| -------------------------- | ------------------------------- | ----------- | ------------ | ---------- | +| `accession` | `content_description` | YES | text | | +| `accession` | `condition_description` | YES | text | | +| `accession` | `disposition` | YES | text | | +| `accession` | `inventory` | YES | text | | +| `accession` | `provenance` | YES | text | | +| `accession` | `general_note` | YES | text | | +| `accession` | `accession_date` | YES | date | | +| `accession` | `retention_rule` | YES | text | | +| `accession` | `access_restrictions_note` | YES | text | | +| `accession` | `use_restrictions_note` | YES | text | | +| `archival_object` | `ref_id` | NO | varchar(255) | MUL | +| `digital_object_component` | `label` | YES | varchar(255) | | +| `repository` | `repo_code` | NO | varchar(255) | UNI | +| `repository` | `name` | NO | varchar(255) | | +| `repository` | `org_code` | YES | varchar(255) | | +| `repository` | `parent_institution_name` | YES | varchar(255) | | +| `repository` | `url` | YES | varchar(255) | | +| `repository` | `image_url` | YES | varchar(255) | | +| `repository` | `contact_persons` | YES | text | | +| `repository` | `description` | YES | text | | +| `repository` | `oai_is_disabled` | YES | int | | +| `repository` | `oai_sets_available` | YES | text | | +| `resource` | `ead_id` | YES | varchar(255) | | +| `resource` | `ead_location` | YES | varchar(255) | | +| `resource` | `finding_aid_title` | YES | text | | +| `resource` | `finding_aid_filing_title` | YES | text | | +| `resource` | `finding_aid_date` | YES | varchar(255) | | +| `resource` | `finding_aid_author` | YES | text | | +| `resource` | `finding_aid_language_note` | YES | varchar(255) | | +| `resource` | `finding_aid_sponsor` | YES | text | | +| `resource` | `finding_aid_edition_statement` | YES | text | | +| `resource` | 
`finding_aid_series_statement` | YES | text | | +| `resource` | `finding_aid_note` | YES | text | | +| `resource` | `finding_aid_subtitle` | YES | text | | + +<!-- arguably top contsainers should be here, or digital objects should be in the supporting records --> + +## Supporting record tables + +Like the main record types listed above, supporting records can also be created independently of other records, and are addressable in the staff interface and API via their own URI. However, they are primarily meaningful via their many-to-many linkages to the main record types (and, sometimes, other supporting record types). These records typically provide additional information about, or otherwise enhance, the primary record types. A few supporting record types - for instance those in the `term` table - are used to enhance other supporting record types. + +| Supporting module tables | Linked to | +| --------------------------------- | --------------------------------------------------- | +| `agent_corporate_entity` | +| `agent_family` | +| `agent_person` | +| `agent_software` | +| `assessment` | +| `classification` | `accession`, `resource` | +| `classification_term` | `classification`, `accession`, `resource` | +| `container_profile` | `top_container` | +| `event` | +| `location` | +| `location_profile` | `location` | +| `subject` | `resource`, `archival_object` | +| `term` | `subject` | +| `top_container` | +| `vocabulary` | `subject`, `term` | +| `assessment_attribute_definition` | `assessment_attribute`, `assessment_attribute_note` | + +<!-- is this the appropriate place for the assessment attribute def? Vocabulary? --> + +## Subrecord tables + +<!-- link to ### Nested records section of the backend readme --> + +Subrecords must be associated with a main or supporting record - they cannot be created independently. As such, they do not have their own URIs, and can only be accessed via the API by retrieving the top-level record with which they are associated. 
In the staff interface, these records are embedded within main or supporting record views. In the API, subrecord data is contained in arrays within main or supporting records.
+
+The various subrecord types do have their own database tables. In addition to data specific to the subrecord type, the tables also contain foreign key columns which hold the database identifiers of main or supporting records. Each row in a subrecord table must have a value in one of these foreign key fields. Some subrecords can have another subrecord as a parent (for instance, the `sub_container` subrecord has `instance_id` as its foreign key column).
+
+Subrecords exist in a one-to-many relationship with their parent records, so a record's `id` may appear multiple times in a subrecord table (e.g. when there are two dates associated with a resource record).
+
+It is important to note that subrecords are deleted and recreated upon each save of the main or supporting record with which they are associated, regardless of whether the subrecord itself is modified. This means that the database identifier is deleted and reassigned upon each save.
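For instance, the date subrecords attached to a single accession can be retrieved by filtering the subrecord table on the relevant foreign key column (a MySQL sketch; the accession id is a placeholder):

```sql
SELECT d.begin
     , d.end
     , d.expression
FROM date d
#change to your desired accession id
WHERE d.accession_id = 123
```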
+ +| Subrecord tables | Foreign keys | +| --------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `agent_contact` | `agent_person_id`, `agent_family_id`, `agent_corporate_entity_id`, `agent_software_id` | +| `date` | `accession_id`, `deaccession_id`, `archival_object_id`, `resource_id`, `event_id`, `digital_object_id`, `digital_object_component_id`, `related_agents_rlshp_id`, `agent_person_id`, `agent_family_id`, `agent_corporate_entity_id`, `agent_software_id`, `name_person_id`, `name_family_id`, `name_corporate_entity_id`, `name_software_id` | +| `extent` | `accession_id`, `deaccession_id`, `archival_object_id`, `resource_id`, `digital_object_id`, `digital_object_component_id` | +| `external_document` | `accession_id`, `archival_object_id`, `resource_id`, `subject_id`, `agent_person_id`, `agent_family_id`, `agent_corporate_entity_id`, `agent_software_id`, `rights_statement_id`, `digital_object_id`, `digital_object_component_id`, `event_id` | +| `external_id` | `subject_id`, `accession_id`, `archival_object_id`, `collection_management_id`, `digital_object_id`, `digital_object_component_id`, `event_id`, `location_id`, `resource_id` | +| `file_version` | `digital_object_id`, `digital_object_component_id` | +| `instance` | `resource_id`, `archival_object_id`, `accession_id` | +| `name_authority_id` | `name_person_id`, `name_family_id`, `name_software_id`, `name_corporate_entity_id` | +| `name_corporate_entity` | `agent_corporate_entity_id` | +| `name_family` | `agent_family_id` | +| `name_person` | `agent_person_id` | +| `name_software` | `agent_software_id` | +| `note` | `resource_id`, `archival_object_id`, `digital_object_id`, `digital_object_component_id`, 
`agent_person_id`, `agent_corporate_entity_id`, `agent_family_id`, `agent_software_id`, `rights_statement_act_id`, `rights_statement_id` | +| `note_persistent_id` | `note_id`, `parent_id` | +| `revision_statement` | `resource_id` | +| `rights_restriction` | `resource_id`, `archival_object_id` | +| `rights_restriction_type` | `rights_restriction_id` | +| `rights_statement` | `accession_id`, `archival_object_id`, `resource_id`, `digital_object_id`, `digital_object_component_id`, `repo_id` | +| `rights_statement_act` | `rights_statement_id` | +| `sub_container` | `instance_id` | +| `telephone` | `agent_contact_id` | +| `user_defined` | `accession_id`, `resource_id`, `digital_object_id` | +| `ark_name` | `archival_object_id`, `resource_id` | +| `assessment_attribute_note` | `assessment_id` | +| `assessment_attribute` | `assessment_id` | +| `lang_material` | `archival_object_id`, `resource_id`, `digital_object_id`, `digital_object_component_id` | +| `language_and_script` | `lang_material_id` | +| `collection_management` | `accession_id`, `resource_id`, `digital_object_id` | +| `location_function` | `location_id` | + +<!-- appropriate place for collection management and deaccession stuff? what about location function? all the rights statement stuff? Is there a specific thing that defines a subrecord as a subrecord? --> + +## Relationship tables + +These tables exist to enable linking between main records and supporting records. Relationship tables are necessary because, unlike subrecord tables, supporting record tables do not include foreign keys which link them to the main record tables. + +Most relationship tables have the `_rlshp` suffix in their names. They typically contain just the primary keys for the tables that are being linked, though a few tables also include fields that are specific to the relationship between the two record types. 
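As a simple illustration, the subjects linked to a resource can be fetched through `subject_rlshp` (a MySQL sketch; it assumes the `resource_id` and `subject_id` foreign key columns on that table, and the resource id is a placeholder):

```sql
SELECT s.title AS subject
FROM subject_rlshp sr
JOIN subject s on sr.subject_id = s.id
#change to your desired resource id
WHERE sr.resource_id = 4556
```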
+ +| Relationship/linking tables | Tables linked | +| ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `assessment_reviewer_rlshp` | `assessment` to `agent_person` | +| `assessment_rlshp` | `assessment` to `accession`, `archival_object`, `resource`, or `digital_object` | +| `classification_creator_rlshp` | `classification` to `agent_person`, `agent_family`, `agent_corporate_entity`, or `agent_software` | +| `classification_rlshp` | `classification` or `classification_term` to `resource` or `accession` | +| `classification_term_creator_rlshp` | `classification_term` to `agent_person`, `agent_family`, `agent_corporate_entity`, or `agent_software` | +| `event_link_rlshp` | `event` to `accession`, `resource`, `archival_object`, `digital_object`, `digital_object_component`, `agent_person`, `agent_family`, `agent_corporate_entity`, `agent_software`, or `top_container`. Also includes the `role_id` table, which can be joined with the `enumeration_value` table to return the event role (source, outcome, transfer, context) | +| `instance_do_link_rlshp` | `digital_object` to `instance` | +| `linked_agents_rlshp` | `agent_person`, `agent_software`, `agent_family`, or `agent_corporate_entity` to `accession`, `archival_object`, `digital_object`, `digital_object_component`, `event`, or `resource`. Also includes the `role_id` and `relator_id` tables, which can be joined with the `enumeration_value` table | +| `location_profile_rlshp` | `location` to `location_profile` | +| `owner_repo_rlshp` | `location` to `repository` | +| `related_accession_rlshp` | Links a row in the `accession` table to another row in the `accession` table. 
Also includes fields for `relator` and relationship type. | +| `related_agents_rlshp` | `agent_person`, `agent_corporate_entity`, `agent_software`, or `agent_family` to other agent tables, or two rows in the same agent table. Also includes fields for `relator` and `description`, and the type of relationship. | +| `spawned_rlshp` | `accession` to `resource`. This contains all linked accession data, even if the resource was not spawned from the accession record. | +| `subject_rlshp` | `subject` to `accession`, `archival_object`, `resource`, `digital_object`, or `digital_object_component` | +| `surveyed_by_rlshp` | `assessment` to `agent_person` | +| `top_container_housed_at_rlshp` | `top_container` to `location`. Also includes fields for `start_date`, `end_date`, `status`, and a free-text `note`. | +| `top_container_link_rlshp` | `top_container` to `sub_container` | +| `top_container_profile_rlshp` | `top_container` to `container_profile` | +| `subject_term` | `subject` to `term` | +| `linked_agent_term` | `linked_agents_rlshp` to `term` | + +<!-- is the assessment definition thing a linking table - it pretty much only has foreign keys + +Same question about one of the rights restriction tables - can't remember which one right now. + --> + +It is not always obvious which relationship tables will provide the desired results. 
For instance, to get a box list for a given resource record, enter the following query into a MySQL editor: + +```sql +SELECT DISTINCT CONCAT('/repositories/', resource.repo_id, '/resources/', resource.id) as resource_uri + , resource.identifier + , resource.title + , tc.barcode as barcode + , tc.indicator as box_number +FROM sub_container sc +JOIN top_container_link_rlshp tclr on tclr.sub_container_id = sc.id +JOIN top_container tc on tclr.top_container_id = tc.id +JOIN instance on sc.instance_id = instance.id +JOIN archival_object ao on instance.archival_object_id = ao.id +JOIN resource on ao.root_record_id = resource.id +#change to your desired resource id +WHERE resource.id = 4556 +``` + +Sometimes numerous relationship tables must be joined to retrieve the desired results. For instance, to get all boxes and folders for a given resource record, including any container profiles and locations, enter the following query into a MySQL editor: + +```sql +SELECT CONCAT('/repositories/', tc.repo_id, '/top_containers/', tc.id) as tc_uri + , CONCAT('/repositories/', resource.repo_id, '/resources/', resource.id) as resource_uri + , CONCAT('/repositories/', resource.repo_id) as repo_uri + , CONCAT('/repositories/', ao.repo_id, '/archival_objects/', ao.id) as ao_uri + , resource.identifier AS resource_identifier + , resource.title AS resource_title + , ao.display_string AS ao_title + , ev2.value AS level + , tc.barcode AS barcode + , cp.name AS container_profile + , tc.indicator AS container_num + , ev.value AS sc_type + , sc.indicator_2 AS sc_num +from sub_container sc +JOIN top_container_link_rlshp tclr on tclr.sub_container_id = sc.id +JOIN top_container tc on tclr.top_container_id = tc.id +LEFT JOIN top_container_profile_rlshp tcpr on tcpr.top_container_id = tc.id +LEFT JOIN container_profile cp on cp.id = tcpr.container_profile_id +LEFT JOIN top_container_housed_at_rlshp tchar on tchar.top_container_id = tc.id +JOIN instance on sc.instance_id = instance.id +JOIN 
archival_object ao on instance.archival_object_id = ao.id
+JOIN resource on ao.root_record_id = resource.id
+LEFT JOIN enumeration_value ev on ev.id = sc.type_2_id
+LEFT JOIN enumeration_value ev2 on ev2.id = ao.level_id
+#change to your desired resource id
+WHERE resource.id = 4223
+
+```
+
+ <!-- Mention the CONCAT function for creating URIs -->
+
+## Enumerations
+
+All controlled values used by the application - excluding tool-tips and frontend/public display values, and the values stored in a few of the supporting record tables (see below) - are stored in a table called `enumeration_value`. Controlled values are organized into a variety of parent enumerations (akin to a set of distinct controlled value lists) which are utilized by different record and subrecord types. Parent enumeration data is stored in the `enumeration` table and is linked via the `enumeration_id` foreign key field in the `enumeration_value` table. In the record and subrecord tables, enumeration values appear as foreign keys in a variety of columns, usually identified by an `_id` suffix.
+
+ArchivesSpace comes with a standard set of controlled values, but most of these are modifiable by end-users via the staff interface and API. However, some values in the `enumeration_value` table are read-only - these values define the terminology and data types used in different parts of the application (e.g. the various note types).
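For example, the controlled values belonging to a single enumeration (here, the date labels) can be listed by joining the two tables (a MySQL sketch; it assumes the `readonly` and `position` columns on `enumeration_value`):

```sql
SELECT ev.value
     , ev.readonly
FROM enumeration_value ev
JOIN enumeration e on ev.enumeration_id = e.id
WHERE e.name = 'date_label'
ORDER BY ev.position
```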
+ +Enumeration IDs appear as foreign keys in a variety of database tables: + +| table_name | column_name | enumeration_name | +| -------------------------- | ---------------------------------- | -------------------------------------------------- | +| `accession` | `acquisition_type_id` | accession_acquisition_type | +| `accession` | `resource_type_id` | accession_resource_type | +| `agent_contact` | `salutation_id` | agent_contact_salutation | +| `archival_object` | `level_id` | archival_record_level | +| `collection_management` | `processing_priority_id` | collection_management_processing_priority | +| `collection_management` | `processing_status_id` | collection_management_processing_status | +| `collection_management` | `processing_total_extent_type_id` | extent_extent_type_id | +| `container_profile` | `dimension_units_id` | dimension_units | +| `date` | `calendar_id` | date_calendar | +| `date` | `certainty_id` | date_certainty | +| `date` | `date_type_id` | date_type | +| `date` | `era_id` | date_era | +| `date` | `label_id` | date_label | +| `deaccession` | `scope_id` | deaccession_scope | +| `digital_object` | `digital_oject_type_id` | digital_object_digital_object_type | +| `digital_object` | `level_id` | digital_object_level | +| `event` | `event_type_id` | event_event_type | +| `event` | `outcome_id` | event_outcome | +| `extent` | `extent_type_id` | extent_extent_type | +| `extent` | `portion_id` | extent_portion | +| `external_document` | `identifier_type_id` | rights_statement_external_document_identifier_type | +| `file_version` | `checksum_method_id` | file_version_checksum_methods | +| `file_version` | `file_format_name_id` | file_version_file_format_name | +| `file_version` | `use_statement_id` | file_version_use_statement | +| `file_version` | `xlink_actuate_attribute_id` | file_version_xlink_actuate_attribute | +| `file_version` | `xlink_show_attribute_id` | file_version_xlink_show_attribute | +| `instance` | `instance_type_id` | 
instance_instance_type | +| `language_and_script` | `language_id` | +| `language_and_script` | `script_id` | +| `location` | `temporary_id` | location_temporary | +| `location_function` | `location_function_type_id` | location_function_type | +| `location_profile` | `dimension_units_id` | dimension_units | +| `name_corporate_entity` | `rules_id` | name_rule | +| `name_corporate_entity` | `source_id` | name_source | +| `name_family` | `rules_id` | name_rule | +| `name_family` | `source_id` | name_source | +| `name_person` | `name_order_id` | name_person_name_order | +| `name_person` | `rules_id` | name_rule | +| `name_person` | `source_id` | name_source | +| `name_software` | `rules_id` | name_rule | +| `name_software` | `source_id` | name_source | +| `repository` | `country_id` | country_iso_3166 | +| `resource` | `finding_aid_description_rules_id` | resource_finding_aid_description_rules | +| `resource` | `finding_aid_language_id` | +| `resource` | `finding_aid_script_id` | +| `resource` | `finding_aid_status_id` | resource_finding_aid_status | +| `resource` | `level_id` | archival_record_level | +| `resource` | `resource_type_id` | resource_resource_type | +| `rights_restriction_type` | `restriction_type_id` | restriction_type | +| `rights_statement` | `jurisdiction_id` | +| `rights_statement` | `other_rights_basis_id` | rights_statement_other_rights_basis | +| `rights_statement` | `rights_type_id` | rights_statement_rights_type | +| `rights_statement` | `status_id` | +| `rights_statement_act` | `act_type_id` | rights_statement_act_type | +| `rights_statement_act` | `restriction_id` | rights_statement_act_restriction | +| `rights_statement_pre_088` | `ip_status_id` | rights_statement_ip_status | +| `rights_statement_pre_088` | `jurisdiction_id` | +| `rights_statement_pre_088` | `rights_type_id` | rights_statement_rights_type | +| `sub_container` | `type_2_id` | container_type | +| `sub_container` | `type_3_id` | container_type | +| `subject` | `source_id` | 
subject_source | +| `telephone` | `number_type_id` | telephone_number_type | +| `term` | `term_type_id` | subject_term_type | +| `top_container` | `type_id` | container_type | + +<!-- need to add some rlshp tables which have enums --> + +To translate the enumeration ID that appears in the record and subrecord tables, join the `enumeration_value` table. The table can be joined multiple times if there are multiple values to translate, but you must use an alias for each table. For example: + +```sql +SELECT CONCAT('/repositories/', ao.repo_id, '/archival_objects/', ao.id) as ao_uri + , ao.display_string as ao_title + , date.begin + , date.end + , ev.value as date_label + , ev2.value as date_type + , ev3.value as date_calendar +FROM archival_object ao +LEFT JOIN date on date.archival_object_id = ao.id +LEFT JOIN enumeration_value ev on ev.id = date.label_id +LEFT JOIN enumeration_value ev2 on ev2.id = date.date_type_id +LEFT JOIN enumeration_value ev3 on ev3.id = date.calendar_id +``` + +**NOTE**: `container_profile`, `location_profile`, and `assessment_attribute_definition` records are similar to the records in the `enumeration_value` table in that they store controlled values which are referenced by other parts of the system. However, they differ in that they have their own tables and are addressable via their own URIs. + +## User, setting, and permission tables + +These tables store user and permissions information, user/repository/global preferences, and RDE and custom report templates. 
+ +| Table name | Description | +| ------------------------ | ------------------------------------------------------- | +| `custom_report_template` | Custom report templates | +| `default_values` | Default values settings | +| `group` | Data about permission groups created by each repository | +| `group_permission` | Links the permission table to the group table | +| `group_user` | Links the group table to the user table | +| `oai_config` | Configuration data for OAI-PMH harvesting | +| `permission` | All permission types that can be assigned to users | +| `preference` | User preference data | +| `rde_template` | RDE templates | +| `required_fields` | Contains repository-defined required fields | +| `user` | User data | + +## Job tables + +These tables store data related to background jobs, including imports. + +| Table name | Description | +| --------------------- | ---------------------------------------------------------- | +| `job` | All jobs which have been run in an ArchivesSpace instance. | +| `job_created_record` | Records created via background jobs | +| `job_input_file` | Data about input files used in background jobs | +| `job_modified_record` | Data about records modified via background jobs | + +## System tables + +These tables track actions taken against the database (i.e. edits and deletes), system events, session and authorization data, and database information. These tables are typically not referenced by any other table. + +| Table name | Description | +| ----------------- | --------------------------------------------------------------------------------------------------- | +| `active_edit` | Records being actively edited by a user. Read-only system table | +| `auth_db` | Authentication data for users. Read-only system table | +| `deleted_records` | Records deleted in the past 24 hours. Read-only system table | +| `notification` | Notifications stream. Read-only system table | +| `schema_info` | Contains the database schema version. 
Read-only system table. |
+| `sequence` | The value corresponds to the number of children the archival object has, minus 1. Read-only system table |
+| `session` | Recent session data. Read-only system table |
+| `system_event` | System event data. Read-only system table |
+
+<!-- these are subrecords -->
+<!-- | subnote_metadata |
+| rights_statement_pre_088 | -->
+
+## Parent-Child Relationships and Sequencing
+
+### Repository-scoped records
+
+Many main and supporting records are scoped to a particular repository. In these tables the parent repository is identified by a foreign key which corresponds to the database identifier in the `repository` table:
+
+| Column name | Description | Example | Found in |
+| ----------- | ---------------------------------------- | ------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `repo_id` | The database ID of the parent repository | `12` | `accession`, `archival_object`, `assessment`, `assessment_attribute_definition`, `classification`, `classification_term`, `custom_report_template`, `default_values`, `digital_object`, `digital_object_component`, `event`, `group`, `job`, `preference`, `required_fields`, `resource`, `rights_statement`, `top_container` |
+
+### Parent/child relationships
+
+Hierarchical relationships between other records are also expressed through foreign keys:
+
+| Column name | Description | Example | PK Tables | Found in |
+| ---------------- | ----------- | --------- | 
----------- | ----------- |
+| `root_record_id` | The database ID of the root parent record | `4566` | `resource`, `digital_object`, `classification` | `archival_object`, `digital_object_component`, `classification_term` |
+| `parent_id` | The database ID of the immediate parent record. This is used to identify parent records which are of the same type as the child record (e.g., two archival object records). The value will be NULL if the only parent is the root record. | `1748121` | `archival_object`, `classification_term`, `digital_object_component` | `archival_object`, `classification_term`, `digital_object_component`, `note_persistent_id` |
+| `parent_name` | The database ID or URI, and the record type, of the immediate parent | `144@archival_object`, `root@/repositories/2/resources/2` | `resource`, `archival_object`, `classification`, `classification_term`, `digital_object`, `digital_object_component` | `archival_object`, `classification_term`, `digital_object_component` |
+
+Beginning with MySQL 8, you can recursively retrieve all parents of an archival object (or all archival objects linked to a resource) by running the following query:
+
+```sql
+WITH RECURSIVE ao_path AS
+  (SELECT ao1.id
+        , ao1.display_string
+        , ao1.component_id
+        , ao1.parent_id
+        , ev.value as `ao_level`
+        , 1 as level
+   FROM archival_object ao1
+   LEFT JOIN enumeration_value ev on ev.id = ao1.level_id
+   WHERE ao1.id = <your ao id>
+   -- to get all trees for a resource, change to: WHERE ao1.root_record_id = <your root_record_id>
+   UNION ALL
+   SELECT ao2.id
+        , ao2.display_string
+        , ao2.component_id
+        , ao2.parent_id
+        , ev.value as `ao_level`
+        , ao_path.level + 1 as level
+   FROM ao_path
+   JOIN archival_object ao2 on ao_path.parent_id = ao2.id
+   LEFT JOIN enumeration_value ev on ev.id = ao2.level_id)
+
SELECT GROUP_CONCAT(CONCAT(display_string, ' ', ' (', CONCAT(UPPER(SUBSTRING(ao_level,1,1)),LOWER(SUBSTRING(ao_level,2))), ' ', IF(component_id is not NULL, CAST(component_id as CHAR), "N/A"), ')') ORDER BY level DESC SEPARATOR ' > ') as tree
+ FROM ao_path;
+```
+
+To retrieve all children of a record (MySQL 8+), adapt the query above: seed the CTE with the ancestor record and reverse the direction of the recursive join (`JOIN archival_object ao2 on ao2.parent_id = ao_path.id`). To retrieve both parents and children, combine the two directions with a UNION.
+
+To retrieve all parents of a record in MySQL 5.7 and below, run the following query:
+
+```sql
+SELECT (SELECT GROUP_CONCAT(CONCAT(display_string, ' (', ao_level, ')') SEPARATOR ' < ') as parent_path
+        FROM (SELECT T2.display_string as display_string
+                   , ev.value as ao_level
+              FROM (SELECT @r AS _id
+                         , @p := @r AS previous
+                         , (SELECT @r := parent_id FROM archival_object WHERE id = _id) AS parent_id
+                         , @l := @l + 1 AS lvl
+                    FROM ((SELECT @r := 1749840, @p := 0, @l := 0) AS vars,
+                         archival_object h)
+                    WHERE @r <> 0 AND @r <> @p) AS T1
+              JOIN archival_object T2 ON T1._id = T2.id
+              LEFT JOIN enumeration_value ev on ev.id = T2.level_id
+              WHERE T2.id != 1749840
+              ORDER BY T1.lvl DESC) as all_parents) as p_path
+     , ao.display_string
+     , CONCAT('/repositories/', ao.repo_id, '/archival_objects/', ao.id) as uri
+FROM archival_object ao
+WHERE ao.id = 1749840
+```
+
+Retrieving all children of a record in MySQL 5.7 and below cannot be done with a single general-purpose query, since recursive CTEs are not available; it requires repeated self-joins (one per level of depth) or a stored procedure.
+
+### Sequencing
+
+The ordering of records in a `resource`, `classification`, or `digital_object` tree is determined by the `position` field.
The position field is also used to order values in the `enumeration_value` and `assessment_attribute_definition` tables: + +| Column name | Description | Example | Found in | +| ----------- | -------------------------------------------------------------- | -------- | ---------------------------------------------------------------------------------------------------------------------------- | +| `position` | The position of the archival object under the immediate parent | `168000` | `enumeration_value`, `assessment_attribute_definition`, `classification_term`, `digital_object_component`, `archival_object` | + +## Boolean fields + +Many records and subrecords include fields which contain integers (`0` or `1`) corresponding to boolean values. + +| Boolean fields | Description | Found in | +| -------------------- | ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `publish` | | `subnote_metadata`, `file_version`, `external_document`, `accession`, `classification`, `agent_person`, `agent_family`, `agent_software`, `agent_corporate_entity`, `classification_term`, `revision_statement`, `repository`, `note`, `digital_object`, `digital_object_component`, `archival_object`, `resource` | +| `suppressed` | | `accession`, `archival_object`, `assessment_reviewer_rlshp`, `assessment_rlshp`, `classification`, `classification_creator_rlshp`, `classification_rlshp`, `classification_term`, `classification_term_creator_rlshp`, 
`digital_object`, `digital_object_component`, `enumeration_value`, `event`, `event_link_rlshp`, `instance_do_link_rlshp`, `linked_agents_rlshp`, `location_profile_rlshp`, `owner_repo_rlshp`, `related_accession_rlshp`, `related_agents_rlshp`, `resource`, `spawned_rlshp`, `surveyed_by_rlshp`, `top_container_housed_at_rlshp`, `top_container_link_rlshp`, `top_container_profile_rlshp` |
+| `restrictions_apply` | | `accession`, `archival_object` |
+
+<!-- NEED TO ADD the restriction field here - the resource and dig ob recs have it -->
+<!-- also add the hidden field in repo and the multiple restrictions in accession -->
+<!-- I think this is good to mention because these are editable via the API but also have their own endpoints. So they are a little different. Should also mention that they are bools in the API docs. -->
+
+## Read-Only Fields
+
+Several system-generated, read-only fields appear across many tables. These include database identifiers, timestamps that track record creation and modification, and fields that record the username of the user who created and last modified each record.
+
+| Most common read-only fields | Description |
+| ------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `id` (primary key) | Database identifier for each record |
+| `system_mtime` | The last time the record was modified by the system |
+| `created_by` | The user that created a record |
+| `last_modified_by` | The user that last modified a record |
+| `user_mtime` | The time that a record was last modified by a user |
+| `create_time` | The time that a record was created |
+| `lock_version` | This field is incremented each time a record is updated. This provides a method of tracking updates and managing near-simultaneous edits by different users.
|
+| `json_schema_version` | The JSON schema version |
+| `aspace_relationship_position` | The position of a linked record in a list of other linked records |
+| `is_slug_auto` | A boolean value that indicates whether a slug was auto-generated |
+| `system_generated` | A boolean value that indicates whether a field was system-generated |
+| `display_string` | A system-generated field which concatenates the title and date fields of an archival object record |
+
+**NOTE**: for subrecord tables these fields may hold unexpected data. Because subrecords are deleted and recreated upon each save of a main or supporting record, their create and modification times are also reset and will not reflect the original creation date of the subrecord itself. For resource records, the timestamp only records the time that the resource record itself was modified, not the last time any of its components were modified.
+
+<!-- ## Querying the ArchivesSpace Database -->
diff --git a/src/content/docs/ja/architecture/directories.md b/src/content/docs/ja/architecture/directories.md
new file mode 100644
index 0000000..8d1c026
--- /dev/null
+++ b/src/content/docs/ja/architecture/directories.md
@@ -0,0 +1,90 @@
+---
+title: Directory structure
+description: Provides short summaries of the different directories in the ArchivesSpace codebase.
+---
+
+ArchivesSpace is made up of several components that are kept in separate directories.
+
+## \_yard
+
+This directory contains the code for the documentation tool used to generate the GitHub Pages site at http://archivesspace.github.io/archivesspace/
+
+## backend
+
+This directory contains the code that handles the database and the API.
+
+## build
+
+This directory contains the code used to build the application, including the commands used to run the development servers and test suites and to build the releases. ArchivesSpace is a JRuby application, and Apache Ant is used to build it.
+
+## clustering
+
+This directory contains code that can be used when clustering an ArchivesSpace installation.
+
+## common
+
+This directory contains code that is used across two or more of the components. It includes configuration options, database schemas and migrations, and translation files.
+
+## contribution_files
+
+This directory contains the documentation and PDF copies of the license agreement files.
+
+## docs
+
+This directory contains documentation files that are included in a release.
+
+## frontend
+
+This directory contains the staff interface Ruby on Rails application.
+
+## indexer
+
+This directory contains the indexer Sinatra application.
+
+## jmeter
+
+This directory contains an example that can be used to set up Apache JMeter to load test functional behavior and measure performance.
+
+## launcher
+
+This directory contains the code that launches (starts, restarts, and stops) an ArchivesSpace application.
+
+## oai
+
+This directory contains the OAI-PMH Sinatra application.
+
+## plugins
+
+This directory contains plugins supported by the ArchivesSpace Program Team.
+
+## proxy
+
+This directory contains the Docker proxy code.
+
+## public
+
+This directory contains the public interface Ruby on Rails application.
+
+## reports
+
+This directory contains the reports code.
+
+## scripts
+
+This directory contains scripts necessary for building, deploying, and other ArchivesSpace tasks.
+
+## selenium
+
+This directory contains the Selenium tests.
+
+## solr
+
+This directory contains the Solr code.
+
+## stylesheets
+
+This directory contains XSL stylesheets used by ArchivesSpace.
+
+## supervisord
+
+This directory contains a tool that can be used to run the development servers.
diff --git a/src/content/docs/ja/architecture/frontend.md b/src/content/docs/ja/architecture/frontend.md new file mode 100644 index 0000000..50e9665 --- /dev/null +++ b/src/content/docs/ja/architecture/frontend.md @@ -0,0 +1,7 @@ +--- +title: Staff interface +--- + +This document provides an overview of the parts of the ArchivesSpace codebase which control the frontend/staff interface. For guidance on using the ArchivesSpace staff interface, consult the [ArchivesSpace Help Center](https://archivesspace.atlassian.net/wiki/spaces/ArchivesSpaceUserManual/overview) (ArchivesSpace members only). + +> Additional documentation needed diff --git a/src/content/docs/ja/architecture/index.md b/src/content/docs/ja/architecture/index.md new file mode 100644 index 0000000..786335d --- /dev/null +++ b/src/content/docs/ja/architecture/index.md @@ -0,0 +1,25 @@ +--- +title: Architecture and components +description: Abbreviated description of how the different parts of ArchivesSpace interact with each other with links to each section. +--- + +ArchivesSpace is divided into several components: the backend, which +exposes the major workflows and data types of the system via a +REST API, a staff interface, a public interface, and a search system, +consisting of Solr and an indexer application. + +These components interact by exchanging JSON data. The format of this +data is defined by a class called JSONModel. 
+ +- [Overview](./overview) +- [JSONModel -- a validated ArchivesSpace record](./jsonmodel) +- [The ArchivesSpace backend](./backend) +- [The ArchivesSpace staff interface](./frontend) +- [Background Jobs](./jobs) +- [Search indexing](./search) +- [The ArchivesSpace public user interface](./public) +- [OAI-PMH interface](./oai-pmh) +- [API](./api) +- [Database](./database) +- [Directory structure](./directories) +- [Dependencies](./languages) diff --git a/src/content/docs/ja/architecture/jobs.md b/src/content/docs/ja/architecture/jobs.md new file mode 100644 index 0000000..5e2ef01 --- /dev/null +++ b/src/content/docs/ja/architecture/jobs.md @@ -0,0 +1,118 @@ +--- +title: Background jobs +description: Describes long running processes, called background jobs, in ArchivesSpace, as well as how they are structured using types, runners, and schemas. Additional guidance on setting jobs to run concurrently and how to add a new job type using a plugin. +--- + +ArchivesSpace provides a mechanism for long-running processes to run +asynchronously. These processes are called `Background Jobs`. + +## Managing Jobs in the Staff UI + +The `Create` menu has a `Background Job` option which shows a submenu of job +types that the current user has permission to create. (See below for more +information about job permissions and hidden jobs.) Selecting one of these +options will take the user to a form to enter any parameters required for the +job and then to create it. + +When a job is created it is placed in the `Background Job Queue`. Jobs in the +queue will be run in the order they were created. (See below for more +information about multiple threads and concurrent jobs.) + +The `Browse` menu has a `Background Jobs` option. This takes the user to a list +of jobs arranged by their status. The user can then view the details of a job, +and cancel it if they have permission. + +## Permissions + +A user must have the `create_job` permission to create a job. 
By default, this
+permission is included in the `repository_basic_data_entry` group.
+
+A user must have the `cancel_job` permission to cancel a job. By default, this
+permission is included in the `repository_managers` group.
+
+When a `JobRunner` registers it can specify additional create and cancel
+permissions. (See below for more information.)
+
+## Types, Runners and Schemas
+
+Each job has a type, and each type has a registered runner to run jobs of that
+type and a JSONModel schema to define its parameters.
+
+### Registered JobRunners
+
+All jobs of a type are handled by a registered `JobRunner`. The job runner
+classes are located here:
+
+```
+backend/app/lib/job_runners/
+```
+
+It is possible to define additional job runners from a plugin. (See below for
+more information about plugins.)
+
+A job runner class must subclass `JobRunner`, register to run one or more job
+types, and implement a `#run` method for jobs that it handles.
+
+When a job runner registers for a job type, it can set some options:
+
+- `:hidden`
+  - Defaults to `false`
+  - If set to `true`, this job type will not be shown in the list of available job types.
+- `:run_concurrently`
+  - Defaults to `false`
+  - If set to `true`, more than one job of this type can run at the same time.
+- `:create_permissions`
+  - Defaults to `[]`
+  - A permission or list of permissions required, in addition to `create_job`, to create jobs of this type.
+- `:cancel_permissions`
+  - Defaults to `[]`
+  - A permission or list of permissions required, in addition to `cancel_job`, to cancel jobs of this type.
+
+For more information about defining a job runner, see the `JobRunner` superclass:
+
+```
+backend/app/lib/job_runner.rb
+```
+
+### JSONModel Schemas
+
+A job type also requires a JSONModel schema that defines the parameters to run a
+job of the type. The schema name must be the same as the type that the runner
+registers for.
For example:
+
+```
+common/schemas/import_job.rb
+```
+
+This schema, for `JSONModel(:import_job)`, defines the parameters for running a
+job of type `import_job`.
+
+## Concurrency
+
+ArchivesSpace can be configured to run more than one background job at a time.
+By default, there will be two threads available to run background jobs.
+The configuration looks like this:
+
+```
+AppConfig[:job_thread_count] = 2
+```
+
+The `BackgroundJobQueue` will start this number of threads at startup. Those
+threads will then poll for queued jobs and run them.
+
+When a job runner registers, it can set an option called `:run_concurrently`.
+This is `false` by default. When set to `false`, a job thread will not run a job
+if there is already a job of that type running. The job will remain on the queue
+and will be run when there are no longer any jobs of its type running.
+
+When set to `true`, a job will be run when it comes to the front of the queue
+regardless of whether there is a job of the same type running.
+
+## Plugins
+
+It is possible to add a new job type from a plugin. ArchivesSpace includes a
+plugin that demonstrates how to do this:
+
+```
+plugins/jobs_example
+```
diff --git a/src/content/docs/ja/architecture/jsonmodel.md b/src/content/docs/ja/architecture/jsonmodel.md
new file mode 100644
index 0000000..9002c8b
--- /dev/null
+++ b/src/content/docs/ja/architecture/jsonmodel.md
@@ -0,0 +1,103 @@
+---
+title: JSONModel
+description: Describes the rules and structure behind the JSONModel class, which expresses the rules for different types of archival records. JSONModel instances are the primary data interchange mechanism for ArchivesSpace.
+---
+
+The ArchivesSpace system is concerned with managing a number of
Each record can be expressed as a +set of nested key/value pairs, and associated with each record type is +a number of rules that describe what it means for a record of that +type to be valid: + +- some fields are mandatory, some optional +- some fields can only take certain types +- some fields can only take values from a constrained set +- some fields are dependent on other fields +- some record types can be nested within other record types +- some record types may be related to others through a hierarchy +- some record types form a relationship graph with other record + types + +The JSONModel class provides a common language for expressing these +rules that all parts of the application can share. There is a +JSONModel class instance for each type of record in the system, so: + +```ruby +JSONModel(:digital_object) +``` + +is a class that knows how to take a hash of properties and make sure +those properties conform to the specification of a Digital Object: + +```ruby +JSONModel(:digital_object).from_hash(myhash) +``` + +If it passes validation, a new JSONModel(:digital_object) instance is +returned, which provides accessors for accessing its values, and +facilities for round-tripping between JSON documents and regular Ruby +hashes: + +```ruby +obj = JSONModel(:digital_object).from_hash(myhash) + +obj.title # or obj['title'] +obj.title = 'a new title' # or obj['title'] = 'a new title' + +obj.\_exceptions # Validates the object and reports any issues + +obj.to_hash # Turn the JSONModel object back into a regular hash +obj.to_json # Serialize the JSONModel object into JSON +``` + +Much of the validation performed by JSONModel is provided by the JSON +schema definitions listed in the `common/schemas` directory. JSON +schemas provide a standard way of declaring which properties a record +may and may not contain, along with their types and other +restrictions. 
ArchivesSpace uses these schemas to capture the +validation rules defining each record type in a declarative and +relatively self-documenting fashion. + +JSONModel instances are the primary data interchange mechanism for the +ArchivesSpace system: the API consumes and produces JSONModel +instances (in JSON format), and much of the user interface's life is +spent turning forms into JSONModel instances and shipping them off to +the backend. + +## JSONModel::Client -- A high-level API for interacting with the ArchivesSpace backend + +To save the need for a lot of HTTP request wrangling, ArchivesSpace +ships with a module called JSONModel::Client that simplifies the +common CRUD-style operations. Including this module just requires +passing an additional parameter when initializing JSONModel: + +```ruby +JSONModel::init(:client_mode => true, :url => @backend_url) +include JSONModel +``` + +If you'll be working against a single repository, it's convenient to +set it as the default for subsequent actions: + +```ruby +JSONModel.set_repository(123) +``` + +Then, several additional JSONModel methods are available: + +```ruby +# As before, get a paginated list of accessions (GET) +JSONModel(:accession).all(:page => 1) + +# Create a new accession (POST) +obj = JSONModel(:accession).from_hash(:title => "A new accession", ...) +obj.save + +# Get a single accession by ID (GET) +obj = JSONModel(:accession).find(123) + +# Update an existing accession (POST) +obj = JSONModel(:accession).find(123) +obj.title = "Updated title" +obj.save +``` diff --git a/src/content/docs/ja/architecture/languages.md b/src/content/docs/ja/architecture/languages.md new file mode 100644 index 0000000..e36d138 --- /dev/null +++ b/src/content/docs/ja/architecture/languages.md @@ -0,0 +1,18 @@ +--- +title: Dependencies +description: Lists the technical stack of the application, including programming languages and platforms. 
+--- + +ArchivesSpace components are constructed using several programming languages, platforms, and additional open source projects. + +## Languages + +The languages used are Java, JRuby, Ruby, JavaScript, and CSS. + +## Platforms + +The backend, OAI harvester, and indexer are Sinatra apps. The staff and public user interfaces are Ruby on Rails apps. + +## Additional open source projects + +The database used out of the box and for testing is Apache Derby. The database suggested for production is MySQL. The index platform is Apache Solr. diff --git a/src/content/docs/ja/architecture/oai-pmh.md b/src/content/docs/ja/architecture/oai-pmh.md new file mode 100644 index 0000000..b538aa3 --- /dev/null +++ b/src/content/docs/ja/architecture/oai-pmh.md @@ -0,0 +1,130 @@ +--- +title: OAI-PMH interface +description: Describes how OAI-PMH is set up in ArchivesSpace and how to harvest data using OAI-PMH with example links and additional information. +--- + +A starter OAI-PMH interface for ArchivesSpace allowing other systems to harvest +your records is included in version 2.1.0. Additional features and functionality +will be added in later releases. + +By default, the OAI-PMH interface runs on port 8082. A sample request page is +available at http://localhost:8082/sample. (To access it, make sure that you +have set the AppConfig[:oai_proxy_url] appropriately.) + +The system provides responses to a number of standard OAI-PMH requests, +including GetRecord, Identify, ListIdentifiers, ListMetadataFormats, +ListRecords, and ListSets. Unpublished and suppressed records and elements are +not included in any of the OAI-PMH responses. + +Some responses require the URL parameter metadataPrefix. 
There are five +different metadata responses available: + +- EAD -- oai_ead (resources in EAD) +- Dublin Core -- oai_dc (archival objects and resources in Dublin Core) +- extended DCMI Terms -- oai_dcterms (archival objects and resources in DCMI Metadata Terms format) +- MARC -- oai_marc (archival objects and resources in MARC) +- MODS -- oai_mods (archival objects and resources in MODS) + +The EAD response for resources and MARC response for resources and archival +objects use the mappings from the built-in exporter for resources. The DC, +DCMI terms, and MODS responses for resources and archival objects use mappings +suggested by the community. + +Here are some example URLs and other information for these requests: + +**GetRecord** – needs a record identifier and metadataPrefix +Up to ArchivesSpace v3.5.1 OAI identifiers are in this format: + +`http://localhost:8082/oai?verb=GetRecord&identifier=oai:archivesspace//repositories/2/resources/138&metadataPrefix=oai_ead` + +Starting with ArchivesSpace v4.0.0 OAI identifiers are in the new format (notice the colon after the `oai:archivesspace` namespace part of the identifier): + +`http://localhost:8082/oai?verb=GetRecord&identifier=oai:archivesspace:/repositories/2/resources/138&metadataPrefix=oai_ead` + +see also: https://github.com/code4lib/ruby-oai/releases/tag/v1.0.0 + +**Identify** + +`http://localhost:8082/oai?verb=Identify` + +**ListIdentifiers** – needs a metadataPrefix + +`http://localhost:8082/oai?verb=ListIdentifiers&metadataPrefix=oai_dc` + +**ListMetadataFormats** + +`http://localhost:8082/oai?verb=ListMetadataFormats` + +**ListRecords** – needs a metadataPrefix + +`http://localhost:8082/oai?verb=ListRecords&metadataPrefix=oai_dcterms` + +**ListSets** + +`http://localhost:8082/oai?verb=ListSets` + +Harvesting the ArchivesSpace OAI-PMH server without specifying a set will yield +all published records across all repositories. +Predefined sets can be accessed using the set parameter. 
+In order to retrieve records from sets, include a set parameter in the URL
+along with a metadataPrefix, such as "&set=collection&metadataPrefix=oai_dc".
+These sets can be from configured sets as shown above or from the following
+levels of description:
+
+- Class -- class
+- Collection -- collection
+- File -- file
+- Fonds -- fonds
+- Item -- item
+- Other_Level -- otherlevel
+- Record_Group -- recordgrp
+- Series -- series
+- Sub-Fonds -- subfonds
+- Sub-Group -- subgrp
+- Sub-Series -- subseries
+
+In addition to the sets based on level of description, you can define sets
+based on repository codes and/or sponsors in the config/config.rb file:
+
+```ruby
+AppConfig[:oai_sets] = {
+  'repository_set' => {
+    :repo_codes => ['hello626'],
+    :description => "A set of one or more repositories",
+  },
+  'sponsor_set' => {
+    :sponsors => ['The_Sponsor'],
+    :description => "A set of one or more sponsors",
+  }
+}
```
+
+The interface implements resumption tokens for pagination of results. As an
+example, the following URL format should be used to page through the results
+from a ListRecords request:
+
+`http://localhost:8082/oai?verb=ListRecords&metadataPrefix=oai_ead`
+
+using the resumption token:
+
+`http://localhost:8082/oai?verb=ListRecords&resumptionToken=eyJtZXRhZGF0YV9wcmVmaXgiOiJvYWlfZWFkIiwiZnJvbSI6IjE5NzAtMDEtMDEgMDA6MDA6MDAgVVRDIiwidW50aWwiOiIyMDE3LTA3LTA2IDE3OjEwOjQxIFVUQyIsInN0YXRlIjoicHJvZHVjaW5nX3JlY29yZHMiLCJsYXN0X2RlbGV0ZV9pZCI6MCwicmVtYWluaW5nX3R5cGVzIjp7IlJlc291cmNlIjoxfSwiaXNzdWVfdGltZSI6MTQ5OTM2MTA0Mjc0OX0=`
+
+Note: you do not use the metadataPrefix parameter when you use the resumptionToken.
+
+The ArchivesSpace OAI-PMH server supports persistent deletes, so harvesters
+will be notified of any records that were deleted since
+they last harvested.
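To make the request pattern concrete, here is a small sketch of harvesting helpers in plain Ruby using only the standard library. The base URL and set/prefix values are the examples from this page, not guaranteed for your installation, and the actual HTTP calls are left commented out so the helpers stand alone:

```ruby
# Minimal OAI-PMH harvesting helpers (sketch only; endpoint, set name, and
# metadataPrefix are examples taken from this page).
require "uri"
require "rexml/document"

OAI_BASE = "http://localhost:8082/oai"

# Build an OAI request URL from a verb plus extra parameters.
def oai_url(verb, params = {})
  "#{OAI_BASE}?" + URI.encode_www_form({ "verb" => verb }.merge(params))
end

# Pull the resumptionToken (if any) out of a ListRecords/ListIdentifiers
# response body; returns nil when there are no further pages.
def resumption_token(xml)
  token = nil
  root = REXML::Document.new(xml).root
  if root
    root.each_recursive { |el| token = el.text if el.name == "resumptionToken" }
  end
  token.nil? || token.empty? ? nil : token
end

# The first page names a metadataPrefix; later pages send only the token:
first_page = oai_url("ListRecords", "metadataPrefix" => "oai_dc")
# body      = Net::HTTP.get(URI(first_page))   # requires "net/http"
# token     = resumption_token(body)
# next_page = oai_url("ListRecords", "resumptionToken" => token) if token
```

Looping until `resumption_token` returns `nil` walks every page of the result set.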
+
+Mixed content is removed from Dublin Core, dcterms, MARC, and MODS field outputs
+in the OAI-PMH response (e.g., a scope note mapped to a DC description field
+would not include `<p>`, `<abbr>`, `<address>`, `<archref>`, `<bibref>`, `<blockquote>`,
+`<chronlist>`, `<corpname>`, `<date>`, `<emph>`, `<expan>`, `<extptr>`, `<extref>`,
+`<famname>`, `<function>`, `<genreform>`, `<geogname>`, `<lb>`, `<linkgrp>`, `<list>`,
+`<name>`, `<note>`, `<num>`, `<occupation>`, `<origination>`, `<persname>`, `<ptr>`, `<ref>`, `<repository>`, `<subject>`, `<table>`, `<title>`, `<unitdate>`, `<unittitle>`).
+
+The component level records include inherited data from superior hierarchical
+levels of the finding aid. Element inheritance is determined by institutional
+system configuration (editable in the `config/config.rb` file) as implemented for
+the Public User Interface.
+
+ARKs have not yet been implemented, pending more discussion of how they should
+be formulated.
diff --git a/src/content/docs/ja/architecture/overview.md b/src/content/docs/ja/architecture/overview.md
new file mode 100644
index 0000000..b4a7375
--- /dev/null
+++ b/src/content/docs/ja/architecture/overview.md
@@ -0,0 +1,15 @@
+---
+title: Architecture Overview
+description: The main components of ArchivesSpace and how they interact with each other and the end users.
+---
+
+ArchivesSpace is divided into several components:
+
+- the backend, which exposes the major workflows and data types of the system via a REST API,
+- a staff interface,
+- a public interface,
+- a search system, consisting of Solr and an indexer application.
+
+These components interact by exchanging JSON data. The format of this data is defined by a class called JSONModel.
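To make the JSON exchange concrete, here is an illustrative sketch of the kind of document the components pass around. The field values are invented for this example; `jsonmodel_type` is the discriminator such records carry, but consult the schemas under `common/schemas/` for the real field lists:

```ruby
require "json"

# Illustrative only: a resource record as the sort of JSON the backend,
# indexer, and user interfaces exchange. Field values are invented examples.
record = {
  "jsonmodel_type" => "resource",
  "title"          => "Example Papers",
  "uri"            => "/repositories/2/resources/138"
}

payload = JSON.generate(record)   # what travels over the REST API
parsed  = JSON.parse(payload)     # what the receiving component sees
```

Each component validates what it receives against the JSONModel schema for that record type, so malformed data is rejected at the boundary.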
+
+![archivesspace_architecture](./archivesspace_architecture.svg)
diff --git a/src/content/docs/ja/architecture/public.md b/src/content/docs/ja/architecture/public.md
new file mode 100644
index 0000000..aa6419d
--- /dev/null
+++ b/src/content/docs/ja/architecture/public.md
@@ -0,0 +1,154 @@
+---
+title: Public user interface
+description: Directions for configuration options for the ArchivesSpace Public User Interface, as well as explanation on inheritance of data in records.
+---
+
+The ArchivesSpace Public User Interface (PUI) provides a public
+interface to your ArchivesSpace collections. In a default
+ArchivesSpace installation it runs on port `:8081`.
+
+## Configuration
+
+The PUI is configured using the standard ArchivesSpace `config.rb`
+file, with the relevant configuration options prefixed with
+`:pui_`.
+
+To see the full list of available options, see the file
+[`https://github.com/archivesspace/archivesspace/blob/master/common/config/config-defaults.rb`](https://github.com/archivesspace/archivesspace/blob/master/common/config/config-defaults.rb)
+
+### Preserving Patron Privacy
+
+The **:block_referrer** key in the configuration file (default: **true**) determines whether the full referring URL is
+transmitted when the user clicks a link to a website outside the web domain of your instance of ArchivesSpace. This
+protects your patrons from tracking by that site.
+
+### Main Navigation Menu
+
+You can choose not to display one or more of the links on the main
+(horizontal) navigation menu, either globally or by repository, if you
+have more than one repository. You manage this through the
+`:pui_hide` options in the `config/config.rb` file.
+
+### Repository Customization
+
+#### Display of "badges" on the Repository page
+
+You can configure which badges appear on the Repository page, either
+globally or by repository. See the `:pui_hide` configuration options
+for these too.
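As an illustration, a global tweak in `config/config.rb` might look like the following. The specific `:pui_hide` keys here are examples only; check `common/config/config-defaults.rb` in your version for the options it actually supports:

```ruby
# Example only -- key names are illustrative; see config-defaults.rb for
# the :pui_hide options your ArchivesSpace version actually supports.
AppConfig[:pui_hide][:record_badge] = true      # hide a badge on the Repository page
AppConfig[:pui_hide][:classifications] = true   # hide a main navigation menu link
```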
+
+### Activation of the "Request" button on archival object pages
+
+You can configure, either globally or by repository, whether the
+"Request" button is active on archival object pages for objects that
+don't have an associated Top Container. See the
+`:pui_requests_permitted_for_containers_only` configuration option to
+modify this.
+
+### I18n
+
+You can change the text and labels used by the PUI by editing the
+locale files under the `locales/public` directory of your
+ArchivesSpace distribution.
+
+### Addition of a "lead paragraph"
+
+You can also use the custom `.yml` files, described above, to add a
+custom "lead paragraph" (including html markup) for one or more of
+your repositories, keyed to the repository's code.
+
+For example, if your repository, `My Wonderful Repository`, has a code of `MWR`, this is what you might see in the
+custom `en.yml`:
+
+```yaml
+en:
+  repos:
+    mwr:
+      lead_graph: This <strong>amazing</strong> repository has so much to offer you!
+```
+
+## Development
+
+To run a development server, the PUI follows the same pattern as the rest of ArchivesSpace.
+From your ArchivesSpace checkout:
+
+```shell
+ # Prepare all dependencies
+ build/run bootstrap
+
+ # Run the backend development server (and Solr)
+ build/run backend:devserver
+
+ # Run the indexer
+ build/run indexer
+
+ # Finally, run the PUI itself
+ build/run public:devserver
+```
+
+## Inheritance
+
+### Three options for inheritance
+
+- Directly inherit a value for a field – the record has no value for the field and you want the value in the field to display as if it were the record's own [uncomment the inheritance section in the config, set the desired field (property) to `inherit_directly => true`]
+- Indirectly inherit a value for a field – the record has no value for the field and you want to display the value from a higher level, but precede it with a note that indicates that it comes from that higher level, such as "From the collection" [uncomment the inheritance section in the config, set the desired field (property) to `inherit_directly => false`]
+- Don't display the field at all – the record has no value of its own for the field and you don't want it to display at all [uncomment the inheritance section in the config, delete the lines for the desired field (property)]
+
+### Archival Inheritance
+
+With the new version of the Public Interface, all elements of description can be inherited. This is especially important since the PUI displays each level of description as its own webpage.
+
+Each element of description can be inherited either directly or indirectly. When an element is inherited directly, it will appear as if that element was attached directly to that archival object in the staff interface. When an element is inherited indirectly, it will appear at the lower level of description in the public interface, but the inherited element will be preceded with a note indicating the level of the ancestor from which the note is inherited (e.g. From the Collection, or From the Sub-Series).
+In both cases, the element will only be inherited if it is missing from the archival object. Additionally, the element of description will only be inherited from the closest ancestor. In other words, if a top-level collection record has an access restrictions note, and a child-level series record has an access restrictions note, but the sub-series child of that series record lacks an access restrictions note, then the sub-series record will inherit only the access restrictions note from its parent series record.
+
+Additionally, the identifier element in ArchivesSpace, which is better known as the Reference Code in ISAD-G and DACS, can be inherited in a composite manner. When inherited in a composite manner, the inherited elements will be concatenated together. In other words, an identifier at the item level could look like this: MSS 1. Series A. Item 1. Though the archival object has an identifier of "Item 1", a composite identifier is displayed since the series-level record to which the item belongs has an identifier of "Series A", which in turn also belongs to a collection-level record that has an identifier of "MSS 1".
+
+By default, the following elements are turned on for inheritance:
+
+- Title (direct inheritance)
+- Identifier (indirect inheritance), but by default the identifier inherits from ancestor archival objects only; it does NOT include the resource identifier.
+
+  Also, it is advised to inherit this element in a composite fashion once it is determined whether the level of description should or should not display as part of the identifier, which will depend upon local data-entry practices.
+
+- Language code (direct inheritance, but it does NOT display anywhere in the interface currently; eventually, this could be used for faceting)
+- Dates (direct inheritance)
+- Extents (indirect inheritance)
+- Creator (indirect inheritance)
+- Access restrictions note (direct inheritance)
+- Scope and contents note (indirect inheritance)
+- Language of Materials note (indirect inheritance, but there seems to be a bug right now so that the Language notes always show up as being directly inherited. See AR-XXXX)
+
+See https://github.com/archivesspace/archivesspace/blob/master/common/config/config-defaults.rb#L296-L396 for more information and examples.
+
+Also, a video overview of this feature, which was recorded before development was finished, is available online:
+https://vimeo.com/195457286
+
+### Composite Identifier Inheritance
+
+If you add the following three lines to your configuration file, re-start ArchivesSpace, and then let the indexer re-index your records, you can gain the benefit of composite identifiers:
+
+```ruby
+AppConfig[:record_inheritance][:archival_object][:composite_identifiers] = {
+  :include_level => true,
+  :identifier_delimiter => '. '
+}
+```
+
+To add extra fields, such as subjects, you can add the following:
+
+```ruby
+inherited_fields_extras = [
+  {
+    code: 'subjects',
+    property: 'subjects',
+    inherit_if: proc { |json| json.select { |j| true } },
+    inherit_directly: false,
+  },
+]
+```
+
+When you set `include_level` to true, that means the archival object level will be included in the identifier so that you don't have to repeat that data.
+For example, if the level of description is "Series" and the archival object identifier is "1", and the parent resource identifier is "MSS 1", then the composite identifier would display as "MSS 1. Series 1" at the Series 1 level and in any descendant record. If you set `include_level` to false, then the display would be "MSS 1. 1".
+
+### License
+
+ArchivesSpace is released under the [Educational Community License,
+version 2.0](http://opensource.org/licenses/ecl2.php). See the
+[COPYING](https://github.com/archivesspace/archivesspace/blob/master/COPYING) file for more information.
diff --git a/src/content/docs/ja/architecture/search.md b/src/content/docs/ja/architecture/search.md
new file mode 100644
index 0000000..6320831
--- /dev/null
+++ b/src/content/docs/ja/architecture/search.md
@@ -0,0 +1,46 @@
+---
+title: Search indexing
+description: Explanation of how ArchivesSpace uses Solr for indexing added/updated/deleted records and the differences between the periodic and real-time modes of indexing records.
+---
+
+The ArchivesSpace system uses Solr for its full-text search. As
+records are added/updated/deleted by the backend, the corresponding
+changes are made to the Solr index to keep them (roughly)
+synchronized.
+
+Keeping the backend and Solr in sync is the job of the "indexer", a
+separate process that runs in the background and watches for record
+updates. The indexer operates in two modes simultaneously:
+
+- The periodic mode polls the backend to get a list of records that
+  were added/modified/deleted since it last checked. These changes
+  are propagated to the Solr index. This generally happens every 30
+  to 60 seconds (and is configurable).
+- The real-time mode responds to updates as they happen, applying
+  changes to Solr as soon as they're applied to the backend. This
+  aims to reflect updates within the search indexes in milliseconds
+  or seconds.
+
+The two modes of operation overlap somewhat, but they serve different
+purposes.
+The periodic mode ensures that records are never missed due
+to transient failures, and will bring the indexes up to date even if
+the indexer hasn't run for quite some time--even creating them from
+scratch if necessary. This mode is also used for indexing updates
+made by bulk import processes and other updates that don't need to be
+reflected in the indexes immediately.
+
+The real-time indexer mode attempts to apply updates to the index much
+more quickly. Rather than polling, it performs a `GET` request
+against the `/update-feed` endpoint of the backend. This endpoint
+returns any records that were updated since the last time it was asked
+and, most importantly, it leaves the request hanging if no records
+have changed.
+
+By calling this endpoint in a loop, the real-time indexer spends most
+of its time sitting around waiting for something to happen. The
+moment a record is updated, the already-pending request to the
+`/update-feed` endpoint yields the updated record, which is sent to
+Solr and indexed immediately. This avoids the delays associated with
+polling and keeps indexing latency low where it matters. For example,
+newly created records should appear in the browse list by the time a
+user views that list.
diff --git a/src/content/docs/ja/customization/authentication.md b/src/content/docs/ja/customization/authentication.md
new file mode 100644
index 0000000..e68959a
--- /dev/null
+++ b/src/content/docs/ja/customization/authentication.md
@@ -0,0 +1,139 @@
+---
+title: Additional authentication
+description: Instructions on how to install and configure a custom authentication handler via a plugin.
+---
+
+ArchivesSpace supports LDAP-based authentication out of the box, but you can
+authenticate against other password-based user directories by defining your own
+authentication handler, creating a plug-in, and configuring your ArchivesSpace
+instance to use it.
+If you would rather not have to create your own handler,
+there is a [plugin](https://github.com/lyrasis/aspace-oauth) available that uses OAuth user authentication, which you can add
+to your ArchivesSpace installation.
+
+## Creating a new authentication handler class to use in a plug-in
+
+An authentication handler is just a class that implements a few
+key methods:
+
+- `initialize(opts)` -- An object constructor which receives the
+  configuration block specified in the system's configuration.
+- `name` -- A zero-argument method which just returns a string that
+  identifies the instance of your handler. The format of this
+  string isn't important: it just gets stored as a user attribute
+  (in the ArchivesSpace database) to make it possible to tell which
+  authentication source a user last successfully authenticated
+  against.
+- `authenticate(username, password)` -- a method which checks
+  whether `password` is the correct password for `username`. If the
+  password is correct, returns an instance of `JSONModel(:user)`.
+  Otherwise, returns `nil`.
+
+A new instance of your handler will be created for each login attempt,
+so there's no need to handle concurrency in your implementation.
+
+Your `authenticate` method can do whatever is required to check that
+the provided password is correct, with the only constraint being that
+it must return either `nil` or a `JSONModel(:user)` instance.
+
+The `JSONModel(:user)` class (whose JSON schema is defined in
+`common/schemas/user.rb`) defines the set of properties that the
+system needs for a user. When you return a `JSONModel(:user)` object,
+its values will be used to create an ArchivesSpace user (if a user by
+that name didn't exist already), or update the existing user (if they
+were already known).
+
+**Note**: The `JSONModel(:user)` class validates the values you give it
+against its JSON schema and throws a `JSONModel::ValidationException`
+if anything isn't right.
+If this happens within your handler, the
+exception will be logged and the authentication request will fail.
+
+### A skeleton implementation
+
+Suppose you already have a database with a table containing users that
+should be able to log in to ArchivesSpace. Below is a sketch of an
+authentication handler that will connect to this database and use it
+for authentication.
+
+```ruby
+# For this example we'll use the Sequel database toolkit. Note that
+# this isn't necessary--you could use whatever database library you
+# like here.
+require 'sequel'
+
+class MyDatabaseAuth
+
+  # For easy access to the JSONModel(:user) class
+  include JSONModel
+
+
+  def initialize(definition)
+    # Store the database connection details for use at
+    # authentication time.
+    @db_url = definition[:db_url] or raise "Need a value for :db_url"
+  end
+
+
+  # Just for informational purposes. Return a string containing our
+  # database URL.
+  def name
+    "MyDatabaseAuth - #{@db_url}"
+  end
+
+
+  def authenticate(username, password)
+    # Open a connection to the database
+    Sequel.connect(@db_url) do |db|
+
+      # Check whether we have an entry for the given username
+      # and password in our database's "users" table
+      user = db[:users].filter(:username => username,
+                               :password => password).
+             first
+
+      if !user
+        # The user couldn't be found, or their password was wrong.
+        # Authentication failed.
+        return nil
+      end
+
+      # Build and return a JSONModel(:user) instance from fields in the database
+      JSONModel(:user).from_hash(:username => username,
+                                 :name => user[:user_full_name])
+
+    end
+  end
+
+end
+```
+
+In order to use your new authentication handler, you'll need to add it to the plug-in
+architecture in ArchivesSpace and enable it. Create a new directory, called `our_auth`
+perhaps, in the plugins directory of your ArchivesSpace installation. Inside
+that directory create the directory hierarchy `backend/model/` and place the
+new class file there. Next, configure the new handler.
+
+## Modifying your configuration
+
+To have ArchivesSpace invoke your new authentication handler, just add
+a new entry to the `:authentication_sources` configuration block in the
+`config/config.rb` file.
+
+A configuration for the above example might be as follows:
+
+```ruby
+AppConfig[:authentication_sources] = [{
+  :model => 'MyDatabaseAuth',
+  :db_url => 'jdbc:mysql://localhost:3306/somedb?user=myuser&password=mypassword',
+}]
+```
+
+## Add the plug-in to the list of plug-ins already enabled
+
+In the `config/config.rb` file, find the setting of `AppConfig[:plugins]` and add
+a reference to the new plug-in there. For example, if you named it `our_auth`, the
+`AppConfig[:plugins]` setting may look something like this:
+
+```ruby
+AppConfig[:plugins] = ['local', 'hello_world', 'our_auth']
+```
+
+Restart your ArchivesSpace installation and you should now see authentication
+requests hitting your new handler.
diff --git a/src/content/docs/ja/customization/bower.md b/src/content/docs/ja/customization/bower.md
new file mode 100644
index 0000000..1197f7f
--- /dev/null
+++ b/src/content/docs/ja/customization/bower.md
@@ -0,0 +1,68 @@
+---
+title: Managing frontend assets with Bower
+description: Instructions on how to add static assets to the core project.
+---
+
+This is aimed at developers and applies to the 'frontend' application only.
+
+If you wish to add static assets to the core project (i.e., JavaScript, CSS,
+Less files), please use `bower` to add and install them so we know what's what
+and when to upgrade.
+
+If you wish to do a good deed for ArchivesSpace, you can track down the source
+of any vendor assets not included in bower.json and get them updated and
+installed according to this protocol.
+
+## General Setup
+
+### Step 1: install npm
+
+On OSX, for example:
+
+```shell
+brew install npm
+```
+
+### Step 2: install Bower
+
+```shell
+npm install bower -g
+```
+
+### Step 3: install components
+
+```shell
+bower install
+```
+
+## Adding a static asset to ASpace Frontend (Staff UI)
+
+### Step 1: add the component
+
+```shell
+bower install <PACKAGE NAME> --save
+```
+
+### Step 2: map Bower > Rails
+
+Edit the bower.json file to map the assets you want from bower_components
+to assets. See examples in bower.json.
+This is kind of a hack to work around
+https://github.com/blittle/bower-installer/issues/75
+
+### Step 3: Install assets
+
+```shell
+alias npm-exec='PATH=$(npm bin):$PATH'
+npm-exec bower-installer
+```
+
+### Step 4: Check assets in
+
+Check the installed assets into Git. We version control bower.json and the
+installed files, but not the bower_components directory.
+
+### Production!
+
+Don't forget - if you are adding assets that don't have a .js extension, you
+need to add them to `frontend/config/environments/production.rb`
diff --git a/src/content/docs/ja/customization/configuration.md b/src/content/docs/ja/customization/configuration.md
new file mode 100644
index 0000000..ef98c89
--- /dev/null
+++ b/src/content/docs/ja/customization/configuration.md
@@ -0,0 +1,1249 @@
+---
+title: Configuration
+description: Lists the configuration options available within the config/config.rb file, including configuration names, values, and suggestions for setup.
+---
+
+The primary configuration for ArchivesSpace is done in the config/config.rb
+file. By default, this file contains the default settings, which are indicated
+by commented-out lines (indicated by the "#" in the file). You can adjust these
+settings by adding new lines that change the default and restarting
+ArchivesSpace. Be sure that your new settings are not commented out
+(i.e. do NOT start with a "#"), otherwise the settings will not take effect.
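For example, to override a default, leave the commented line in place for reference and add an active, uncommented line with your value (the URL below is an example value; `:frontend_url` is one of the settings documented on this page):

```ruby
# Default, inactive because of the leading "#":
#AppConfig[:frontend_url] = "http://localhost:8080"

# Your override -- a new, uncommented line (example value):
AppConfig[:frontend_url] = "https://staff.example.edu"
```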
+
+## Commonly changed settings
+
+### Database config
+
+#### `:db_url`
+
+Set your database name and credentials. The default specifies that the embedded database should be used.
+It is recommended to use a MySQL database instead of the embedded database.
+For more info, see [Using MySQL](/provisioning/mysql).
+
+This is an example of specifying MySQL credentials:
+
+`AppConfig[:db_url] = "jdbc:mysql://127.0.0.1:3306/aspace?useUnicode=true&characterEncoding=UTF-8&user=as&password=as123"`
+
+#### `:db_max_connections`
+
+Set the maximum number of database connections used by the application.
+Default is derived from the number of indexer threads.
+
+`AppConfig[:db_max_connections] = proc { 20 + (AppConfig[:indexer_thread_count] * 2) }`
+
+### URLs for ArchivesSpace components
+
+Set the ArchivesSpace backend port. The backend listens on port 8089 by default.
+
+`AppConfig[:backend_url] = "http://localhost:8089"`
+
+Set the ArchivesSpace staff interface (frontend) port. The staff interface listens on port 8080 by default.
+
+`AppConfig[:frontend_url] = "http://localhost:8080"`
+
+Set the ArchivesSpace public interface port. The public interface listens on port 8081 by default.
+
+`AppConfig[:public_url] = "http://localhost:8081"`
+
+Set the ArchivesSpace OAI server port. The OAI server listens on port 8082 by default.
+
+`AppConfig[:oai_url] = "http://localhost:8082"`
+
+Set the ArchivesSpace Solr index port. The Solr server listens on port 8090 by default.
+
+`AppConfig[:solr_url] = "http://localhost:8090"`
+
+Set the ArchivesSpace indexer port. The indexer listens on port 8091 by default.
+
+`AppConfig[:indexer_url] = "http://localhost:8091"`
+
+Set the ArchivesSpace API documentation port. The API documentation listens on port 8888 by default.
+
+`AppConfig[:docs_url] = "http://localhost:8888"`
+
+### Enabling ArchivesSpace components
+
+Enable or disable specific components by setting the following options to true or false (defaults to true):
+
+```ruby
+AppConfig[:enable_backend] = true
+AppConfig[:enable_frontend] = true
+AppConfig[:enable_public] = true
+AppConfig[:enable_solr] = true
+AppConfig[:enable_indexer] = true
+AppConfig[:enable_docs] = true
+AppConfig[:enable_oai] = true
+```
+
+### Application logging
+
+By default, all logging will be output on the screen while the archivesspace command
+is running. When running as a daemon/service, this is put into a file in
+`logs/archivesspace.out`. You can route log output to a different file per component by changing the log value to
+a filepath that archivesspace has write access to.
+
+You can also set the logging level for each component. Valid values are:
+
+- `debug` (everything)
+- `info`
+- `warn`
+- `error`
+- `fatal` (severe only)
+
+#### `AppConfig[:frontend_log]`
+
+File for log output for the frontend (staff interface). Set to "default" to
+route log output to archivesspace.out.
+
+#### `AppConfig[:frontend_log_level]`
+
+Logging level for the frontend.
+
+#### `AppConfig[:backend_log]`
+
+File for log output for the backend. Set to "default" to
+route log output to archivesspace.out.
+
+#### `AppConfig[:backend_log_level]`
+
+Logging level for the backend.
+
+#### `AppConfig[:pui_log]`
+
+File for log output for the public UI. Set to "default" to
+route log output to archivesspace.out.
+
+#### `AppConfig[:pui_log_level]`
+
+Logging level for the public UI.
+
+#### `AppConfig[:indexer_log]`
+
+File for log output for the indexer. Set to "default" to
+route log output to archivesspace.out.
+
+#### `AppConfig[:indexer_log_level]`
+
+Logging level for the indexer.
+
+### Database logging
+
+#### `AppConfig[:db_debug_log]`
+
+Set to true to log all SQL statements.
+Note that this will have a performance impact!
+
+`AppConfig[:db_debug_log] = false`
+
+#### `AppConfig[:mysql_binlog]`
+
+Set to true if you have enabled MySQL binary logging.
+
+`AppConfig[:mysql_binlog] = false`
+
+### Solr backups
+
+#### `AppConfig[:solr_backup_schedule]`
+
+Set the Solr backup schedule using cron syntax. The default value, `"0 * * * *"`, runs a backup
+at the top of every hour. See https://crontab.guru/ for
+information about the schedule syntax.
+
+`AppConfig[:solr_backup_schedule] = "0 * * * *"`
+
+#### `AppConfig[:solr_backup_number_to_keep]`
+
+Number of Solr backups to keep (default = 1).
+
+`AppConfig[:solr_backup_number_to_keep] = 1`
+
+#### `AppConfig[:solr_backup_directory]`
+
+Directory to store Solr backups.
+
+`AppConfig[:solr_backup_directory] = proc { File.join(AppConfig[:data_directory], "solr_backups") }`
+
+### Default Solr params
+
+#### `AppConfig[:solr_params]`
+
+Add default Solr parameters.
+
+A simple example: use AND for search:
+
+`AppConfig[:solr_params] = { "q.op" => "AND" }`
+
+A more complex example: set the boost query value (bq) to boost the relevancy
+for the query string in the title, set the phrase fields parameter (pf) to boost
+the relevancy for the title when the query terms are in close proximity to each
+other, and set the phrase slop (ps) parameter for the pf parameter to indicate
+how close the proximity should be:
+
+```ruby
+AppConfig[:solr_params] = {
+  "bq" => proc { "title:\"#{@query_string}\"*" },
+  "pf" => 'title^10',
+  "ps" => 0,
+}
+```
+
+### Language
+
+#### `AppConfig[:locale]`
+
+Set the application's language (see the .yml files in
+https://github.com/archivesspace/archivesspace/tree/master/common/locales
+for a list of available locale codes). Default is English (:en):
+
+`AppConfig[:locale] = :en`
+
+### Plugin registration
+
+#### `AppConfig[:plugins]`
+
+Plug-ins to load. They will load in the order specified.
+
+`AppConfig[:plugins] = ['local', 'lcnaf']`
+
+### Thread count
+
+#### `AppConfig[:job_thread_count]`
+
+The number of concurrent threads available to run background jobs.
+Introduced because long-running jobs were blocking the queue.
+Resist the urge to set this to a big number!
+
+`AppConfig[:job_thread_count] = 2`
+
+### OAI configuration options
+
+**NOTE: As of version 2.5.2, the following parameters (oai_repository_name, oai_record_prefix, and oai_admin_email) have been deprecated. They should be set in the Staff User Interface. To set them, select the System menu in the Staff User Interface and then select Manage OAI-PMH Settings. These three settings are at the top of the page in the General Settings section. These settings will be completely removed from the config file when version 2.6.0 is released.**
+
+#### `AppConfig[:oai_repository_name]`
+
+`AppConfig[:oai_repository_name] = 'ArchivesSpace OAI Provider'`
+
+#### `AppConfig[:oai_record_prefix]`
+
+`AppConfig[:oai_record_prefix] = 'oai:archivesspace'`
+
+#### `AppConfig[:oai_admin_email]`
+
+`AppConfig[:oai_admin_email] = 'admin@example.com'`
+
+#### `AppConfig[:oai_sets]`
+
+In addition to the sets based on level of description, you can define OAI Sets
+based on repository codes and/or sponsors as follows:
+
+```ruby
+AppConfig[:oai_sets] = {
+  'repository_set' => {
+    :repo_codes => ['hello626'],
+    :description => "A set of one or more repositories",
+  },
+
+  'sponsor_set' => {
+    :sponsors => ['The_Sponsor'],
+    :description => "A set of one or more sponsors",
+  },
+}
+```
+
+## Other less commonly changed settings
+
+### Default admin password
+
+#### `AppConfig[:default_admin_password]`
+
+Set the default admin password. Default password is "admin".
+
+`AppConfig[:default_admin_password] = "admin"`
+
+### Data directories
+
+#### `AppConfig[:data_directory]`
+
+If you run ArchivesSpace using the standard scripts (archivesspace.sh,
+archivesspace.bat or as a Windows service), the value of `:data_directory` is
+automatically set to be the "data" directory of your ArchivesSpace
+distribution.
+You don't need to change this value unless you specifically
+want ArchivesSpace to put its data files elsewhere.
+
+`AppConfig[:data_directory] = File.join(Dir.home, "ArchivesSpace")`
+
+#### `AppConfig[:backup_directory]`
+
+Directory to store automated backups when using the embedded demo database (Apache Derby instead of MySQL). This defaults to `demo_db_backups` within the `data` directory.
+
+`AppConfig[:backup_directory] = proc { File.join(AppConfig[:data_directory], "demo_db_backups") }`
+
+### Solr defaults
+
+#### `AppConfig[:solr_indexing_frequency_seconds]`
+
+The number of seconds between each run of the SUI and PUI indexers. The indexers will perform an indexing cycle every configured number of seconds.
+
+`AppConfig[:solr_indexing_frequency_seconds] = 30`
+
+#### `AppConfig[:solr_facet_limit]`
+
+The maximum number of distinct facet terms Solr will include in the response for a given field.
+
+`AppConfig[:solr_facet_limit] = 100`
+
+#### `AppConfig[:default_page_size]`
+
+The number of records included in each page in all paginated backend API responses.
+
+`AppConfig[:default_page_size] = 10`
+
+#### `AppConfig[:max_page_size]`
+
+Requests to the backend API can define a custom `page_size` param. This is the maximum allowed page size.
+
+`AppConfig[:max_page_size] = 250`
+
+### Cookie prefix
+
+#### `AppConfig[:cookie_prefix]`
+
+A prefix added to cookies used by the application.
+Change this if you're running more than one instance of ArchivesSpace on the
+same hostname (i.e. multiple instances on different ports).
+Default is "archivesspace".
+
+`AppConfig[:cookie_prefix] = "archivesspace"`
+
+### SUI Indexer settings
+
+The periodic indexer can run using multiple threads to take advantage of
+multiple CPU cores.
By setting these two options, you can control how many
CPU cores are used, and the amount of memory that will be consumed by the
indexing process (more cores and/or more records per thread means more memory used).

#### `AppConfig[:indexer_records_per_thread]`

The size of each batch of records passed to each indexer worker-thread to process and push to Solr. More records per thread means that more memory will be used by the indexer process.

`AppConfig[:indexer_records_per_thread] = 25`

#### `AppConfig[:indexer_thread_count]`

The number of worker-threads used by the SUI indexer. More worker-threads means that more CPU cores will be used.

`AppConfig[:indexer_thread_count] = 4`

#### `AppConfig[:indexer_solr_timeout_seconds]`

The indexer makes requests to Solr in order to push updated records to the Solr index. This is the maximum number of seconds that the indexer will wait for Solr to respond to a request.

`AppConfig[:indexer_solr_timeout_seconds] = 300`

### PUI Indexer Settings

#### `AppConfig[:pui_indexer_enabled]`

If false, no PUI indexer is started. Set to false if not using the PUI at all.

`AppConfig[:pui_indexer_enabled] = true`

#### `AppConfig[:pui_indexing_frequency_seconds]`

The number of seconds between each run of the PUI indexer. The indexer will perform an indexing cycle every configured number of seconds.

`AppConfig[:pui_indexing_frequency_seconds] = 30`

#### `AppConfig[:pui_indexer_records_per_thread]`

The size of each batch of records passed to each indexer worker-thread to process and push to Solr.
The PUI indexer can run using multiple threads to take advantage of
multiple CPU cores. By setting these two options, you can control how many
CPU cores are used, and the amount of memory that will be consumed by the
indexing process (more cores and/or more records per thread means more memory used).
`AppConfig[:pui_indexer_records_per_thread] = 25`

#### `AppConfig[:pui_indexer_thread_count]`

The number of worker-threads used by the PUI indexer. More worker-threads means that more CPU cores will be used.

`AppConfig[:pui_indexer_thread_count] = 1`

### Index state

#### `AppConfig[:index_state_class]`

The indexer needs a place to store its state (to keep track of which records have already been indexed).
Set to 'IndexState' (default) to store the state in the local `data` directory.
Set to 'IndexStateS3' (optional) to store the state in an AWS S3 bucket.

`AppConfig[:index_state_class] = 'IndexState'`

#### `AppConfig[:index_state_s3]` - Relevant only when using S3 storage for the indexer state

If using S3 storage for the indexer state (optional), you need to configure access to S3.

NOTE: S3 charges for read/update requests, and the PUI indexer is continually
writing to state files, so you may want to increase `pui_indexing_frequency_seconds` and `solr_indexing_frequency_seconds`.

##### Configuring S3 access using environment variables (default)

By default, the S3 configuration is fetched from the following shell environment variables:

- `AWS_REGION`
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_ASPACE_BUCKET`

The `:cookie_prefix` configuration value is used as a prefix for the state files stored in the bucket - useful when using the same bucket to store the indexer state of multiple ArchivesSpace instances.

##### Configuring S3 access using AppConfig variable in the `config.rb` file

```ruby
AppConfig[:index_state_s3] = {
  region: "us-east-1",
  aws_access_key_id: "ASIAXXXXEXAMPLEID",
  aws_secret_access_key: "xXxxXXxxXX/XXXXXX/XXXXXXXEXAMPLEKEY",
  bucket: "my-as-test-bucket",
  prefix: proc { "#{AppConfig[:cookie_prefix]}_" },
}
```

You can use a literal string for `prefix:` (e.g. `prefix: "some string"`) instead of the proc based on the `:cookie_prefix` AppConfig variable shown above.
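Many settings in this file, including the `prefix:` above, use Ruby procs so the value is computed when it is read rather than when `config.rb` is loaded. The following is a minimal sketch of that pattern only - a plain Hash stands in for the real AppConfig class, which works differently internally:

```ruby
# Sketch only: a plain Hash standing in for ArchivesSpace's AppConfig class.
config = {}
config[:cookie_prefix] = "archivesspace"
config[:prefix] = proc { "#{config[:cookie_prefix]}_" }

# Resolve proc-valued settings at read time, so later changes are picked up.
resolve = lambda do |key|
  value = config[key]
  value.is_a?(Proc) ? value.call : value
end

puts resolve.call(:prefix) # "archivesspace_"

# Changing the underlying setting changes what the proc produces next time.
config[:cookie_prefix] = "second_instance"
puts resolve.call(:prefix) # "second_instance_"
```

Because the proc is evaluated lazily, a setting like `:backup_directory` can safely reference `:data_directory` even if the latter is changed further down in your `config.rb`.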
### Misc. database options

#### `AppConfig[:allow_other_unmapped]`

Allow assigning the special enumeration value `other_unmapped` for dynamic enum (controlled value) fields. When set to `true`, `other_unmapped` is treated as a valid value for all enumeration (controlled value) fields and is added as a possible value for all controlled value lists.
This feature is designed for handling unmapped or unknown enumeration values. It is particularly useful during data migrations, where source data may have values not yet defined in controlled value lists, or more generally when importing external data that uses values not already defined in a controlled value list.

`AppConfig[:allow_other_unmapped] = false`

#### `AppConfig[:db_url_redacted]`

This is how the database URL (which includes the database username and password) will appear in the logs. The default replaces the username and password with `[REDACTED]`, so that:
`"user=john&password=secret123"`
becomes
`"user=[REDACTED]&password=[REDACTED]"`

`AppConfig[:db_url_redacted] = proc { AppConfig[:db_url].gsub(/(user|password)=(.*?)(&|$)/, '\1=[REDACTED]\3') }`

#### `AppConfig[:demo_db_backup_schedule]`

When using the embedded demo database (Apache Derby instead of MySQL), this is the schedule of the automated backups, in cron format. By default, it is at 4AM every day.

`AppConfig[:demo_db_backup_schedule] = "0 4 * * *"`

#### `AppConfig[:demo_db_backup_number_to_keep]`

How many backups to keep available when using the embedded demo database.

`AppConfig[:demo_db_backup_number_to_keep] = 7`

#### `AppConfig[:allow_unsupported_database]`

Set this to true if you are determined to use a database other than MySQL or the embedded demo database based on Apache Derby (not recommended!).
`AppConfig[:allow_unsupported_database] = false`

#### `AppConfig[:allow_non_utf8_mysql_database]`

Set this to true to skip the standard validation that the character encoding of MySQL tables is set to UTF-8 (not recommended!).

`AppConfig[:allow_non_utf8_mysql_database] = false`

### Proxy URLs

If you are serving user-facing applications via a proxy
(i.e., another domain or port, or via https, or under a prefix), it is
recommended that you record those URLs in your configuration.

#### `AppConfig[:frontend_proxy_url]`

Proxy URL for the frontend (staff interface)

`AppConfig[:frontend_proxy_url] = proc { AppConfig[:frontend_url] }`

#### `AppConfig[:public_proxy_url]`

Proxy URL for the public interface

`AppConfig[:public_proxy_url] = proc { AppConfig[:public_url] }`

#### `AppConfig[:oai_proxy_url]`

Proxy URL for the OAI service (if exposed; see the OAI section)

`AppConfig[:oai_proxy_url] = 'http://your-public-oai-url.example.com'`

#### `AppConfig[:frontend_proxy_prefix]`

Don't override this setting unless you know what you're doing

`AppConfig[:frontend_proxy_prefix] = proc { "#{URI(AppConfig[:frontend_proxy_url]).path}/".gsub(%r{/+$}, "/") }`

#### `AppConfig[:public_proxy_prefix]`

Don't override this setting unless you know what you're doing

`AppConfig[:public_proxy_prefix] = proc { "#{URI(AppConfig[:public_proxy_url]).path}/".gsub(%r{/+$}, "/") }`

### Enable component applications

Setting any of these to false will prevent the associated applications from starting.
Temporarily disabling the frontend and public UIs and/or the indexer may help users
who are running into memory-related issues during migration.
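As an illustration, a hypothetical `config.rb` fragment for such a migration run might keep only the backend and Solr up while pausing the user-facing apps and the indexer. The keys are the settings documented below; whether this combination is appropriate depends on your migration:

```ruby
# Hypothetical migration-time fragment: keep the backend and Solr running,
# pause the UIs and indexer to reduce memory pressure. Illustrative only.
AppConfig[:enable_backend]  = true
AppConfig[:enable_solr]     = true
AppConfig[:enable_frontend] = false
AppConfig[:enable_public]   = false
AppConfig[:enable_indexer]  = false
```

Remember to restore the defaults (all `true`) and restart once the migration completes.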
#### `AppConfig[:enable_backend]`

`AppConfig[:enable_backend] = true`

#### `AppConfig[:enable_frontend]`

`AppConfig[:enable_frontend] = true`

#### `AppConfig[:enable_public]`

`AppConfig[:enable_public] = true`

#### `AppConfig[:enable_solr]`

`AppConfig[:enable_solr] = true`

#### `AppConfig[:enable_indexer]`

`AppConfig[:enable_indexer] = true`

#### `AppConfig[:enable_docs]`

`AppConfig[:enable_docs] = true`

#### `AppConfig[:enable_oai]`

`AppConfig[:enable_oai] = true`

### Jetty shutdown

Some use cases want the ability to shut down the Jetty service using Jetty's
ShutdownHandler, which allows a POST request to a specific URI to signal
server shutdown. The prefix for this URI path is set to `/xkcd` to reduce the
possibility of a collision in the path configuration. So, the full path would be

`/xkcd/shutdown?token={randomly generated password}`

The launcher creates a password to use this, which is stored
in the data directory. This is not turned on by default.

#### `AppConfig[:use_jetty_shutdown_handler]`

`AppConfig[:use_jetty_shutdown_handler] = false`

#### `AppConfig[:jetty_shutdown_path]`

`AppConfig[:jetty_shutdown_path] = "/xkcd"`

### Managing multiple backend instances

If you have multiple instances of the backend running behind a load
balancer, list the URL of each backend instance here. This is used by
real-time indexing, which needs to connect directly to each running
instance.

By default, we assume you're not using a load balancer, so we just connect
to the regular backend URL.
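For illustration, a hypothetical fragment for two backend instances reachable behind a load balancer might look like this (the hostnames are made up; list whatever URLs your instances actually answer on):

```ruby
# Hypothetical: let real-time indexing reach each backend instance directly,
# bypassing the load balancer. Hostnames below are illustrative only.
AppConfig[:backend_instance_urls] = [
  'http://aspace-backend-1.internal:8089',
  'http://aspace-backend-2.internal:8089',
]
```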
+ +#### `AppConfig[:backend_instance_urls]` + +`AppConfig[:backend_instance_urls] = proc { [AppConfig[:backend_url]] }` + +### Theme + +For theming customization, see https://docs.archivesspace.org/customization/theming/ + +#### `AppConfig[:frontend_theme]` + +Name of the theme to use on the Staff UI + +`AppConfig[:frontend_theme] = "default"` + +#### `AppConfig[:public_theme]` + +Name of the theme to use on the Public UI + +`AppConfig[:public_theme] = "default"` + +### Session expiration + +#### `AppConfig[:session_expire_after_seconds]` + +Sessions marked as expirable will timeout after this number of seconds of inactivity + +`AppConfig[:session_expire_after_seconds] = 3600` + +#### `AppConfig[:session_nonexpirable_force_expire_after_seconds]` + +Sessions marked as non-expirable will eventually expire too, but after a longer period. + +`AppConfig[:session_nonexpirable_force_expire_after_seconds] = 604800` + +### System usernames + +Hidden (not viewable on the Staff UI User management) system users are automatically created to be used by the indexer, the PUI and the Staff UI in order to access the backend API. + +#### `AppConfig[:search_username]` + +The user name of the hidden system user that the indexer uses to access the backend API +`AppConfig[:search_username] = "search_indexer"` + +#### `AppConfig[:public_username]` + +The user name of the hidden system user that the PUI uses to access the backend API + +`AppConfig[:public_username] = "public_anonymous"` + +#### `AppConfig[:staff_username]` + +The user name of the hidden system user that the Staff UI uses to access the backend API + +`AppConfig[:staff_username] = "staff_system"` + +### Authentication sources + +ArchivesSpace comes with its own user management functionality but can also be configured to authenticate against one or more [LDAP directories](/customization/ldap/). 
OAuth authentication is available using the [aspace-oauth plugin](https://github.com/lyrasis/aspace-oauth).

`AppConfig[:authentication_sources] = []`

### Misc. backlog and snapshot settings

#### `AppConfig[:realtime_index_backlog_ms]`

> TODO - Needs more documentation

`AppConfig[:realtime_index_backlog_ms] = 60000`

### Notifications configuration

An internal notification mechanism is used to keep user preferences, enumeration (controlled value list) values, repository information, etc. up to date within the UI while minimizing requests to the backend API.

#### `AppConfig[:notifications_backlog_ms]`

Notifications older than this number of milliseconds are considered expired and will no longer be announced.

`AppConfig[:notifications_backlog_ms] = 60000`

#### `AppConfig[:notifications_poll_frequency_ms]`

How often notifications should be announced.

`AppConfig[:notifications_poll_frequency_ms] = 1000`

#### `AppConfig[:max_usernames_per_source]`

> TODO - Needs more documentation

`AppConfig[:max_usernames_per_source] = 50`

#### `AppConfig[:demodb_snapshot_flag]`

> TODO - Needs more documentation

`AppConfig[:demodb_snapshot_flag] = proc { File.join(AppConfig[:data_directory], "create_demodb_snapshot.txt") }`

### Report Configuration

#### `AppConfig[:report_page_layout]`

Uses valid values for the CSS3 @page directive's size property:
http://www.w3.org/TR/css3-page/#page-size-prop

`AppConfig[:report_page_layout] = "letter"`

#### `AppConfig[:report_pdf_font_paths]`

> TODO - Needs more documentation

`AppConfig[:report_pdf_font_paths] = proc { ["#{AppConfig[:backend_url]}/reports/static/fonts/dejavu/DejaVuSans.ttf"] }`

#### `AppConfig[:report_pdf_font_family]`

> TODO - Needs more documentation

`AppConfig[:report_pdf_font_family] = "\"DejaVu Sans\", sans-serif"`

### Plugins directory

#### `AppConfig[:plugins_directory]`

By default, the plugins directory will be in your ASpace Home.
If you want to override that, update this with an absolute path.

`AppConfig[:plugins_directory] = "plugins"`

### Feedback

#### `AppConfig[:feedback_url]`

URL to direct the feedback link to.
You can remove this from the footer by making the value blank.

`AppConfig[:feedback_url] = "http://archivesspace.org/contact"`

### User registration

#### `AppConfig[:allow_user_registration]`

Allow an unauthenticated user to create an account.

`AppConfig[:allow_user_registration] = true`

### Help Configuration

#### `AppConfig[:help_enabled]`

> TODO - Needs more documentation

`AppConfig[:help_enabled] = true`

#### `AppConfig[:help_url]`

> TODO - Needs more documentation

`AppConfig[:help_url] = "https://archivesspace.atlassian.net/wiki/spaces/ArchivesSpaceUserManual/overview"`

#### `AppConfig[:help_topic_base_url]`

> TODO - Needs more documentation

`AppConfig[:help_topic_base_url] = "https://archivesspace.atlassian.net/wiki/spaces/ArchivesSpaceUserManual/pages/"`

### Shared storage

#### `AppConfig[:shared_storage]`

`AppConfig[:shared_storage] = proc { File.join(AppConfig[:data_directory], "shared") }`

### Background jobs

#### `AppConfig[:job_file_path]`

Formerly known as `:import_job_path`.

> TODO - Needs more documentation

`AppConfig[:job_file_path] = proc { AppConfig.has_key?(:import_job_path) ? AppConfig[:import_job_path] : File.join(AppConfig[:shared_storage], "job_files") }`

#### `AppConfig[:job_poll_seconds]`

> TODO - Needs more documentation

`AppConfig[:job_poll_seconds] = proc { AppConfig.has_key?(:import_poll_seconds) ? AppConfig[:import_poll_seconds] : 5 }`

#### `AppConfig[:job_timeout_seconds]`

> TODO - Needs more documentation

`AppConfig[:job_timeout_seconds] = proc { AppConfig.has_key?(:import_timeout_seconds) ?
AppConfig[:import_timeout_seconds] : 300 }` + +#### `AppConfig[:jobs_cancelable]` + +By default, only allow jobs to be cancelled if we're running against MySQL (since we can rollback) + +`AppConfig[:jobs_cancelable] = proc { (AppConfig[:db_url] != AppConfig.demo_db_url).to_s }` + +### Locations + +#### `AppConfig[:max_location_range]` + +> TODO - Needs more documentation + +`AppConfig[:max_location_range] = 1000` + +### Schema Info check + +#### `AppConfig[:ignore_schema_info_check]` + +ASpace backend will not start if the db's schema_info version is not set +correctly for this version of ASPACE. This is to ensure that all the +migrations have run and completed before starting the app. You can override +this check here. Do so at your own peril. + +`AppConfig[:ignore_schema_info_check] = false` + +### Demo data + +#### `AppConfig[:demo_data_url]` + +This is a URL that points to some demo data that can be used for testing, +teaching, etc. To use this, set an OS environment variable of ASPACE_DEMO = true + +`AppConfig[:demo_data_url] = "https://s3-us-west-2.amazonaws.com/archivesspacedemo/latest-demo-data.zip"` + +### External IDs + +#### `AppConfig[:show_external_ids]` + +Expose external ids in the frontend + +`AppConfig[:show_external_ids] = false` + +### Jetty request/response buffer + +Set the allowed size of the request/response header that Jetty will accept +(anything bigger gets a 403 error). Note if you want to jack this size up, +you will also have to configure your Nginx/Apache as well if you're using that + +#### `AppConfig[:jetty_response_buffer_size_bytes]` + +`AppConfig[:jetty_response_buffer_size_bytes] = 64 * 1024` + +#### `AppConfig[:jetty_request_buffer_size_bytes]` + +`AppConfig[:jetty_request_buffer_size_bytes] = 64 * 1024` + +### Container management configuration fields + +#### `AppConfig[:container_management_barcode_length]` + +Defines global and repo-level barcode validations (validating on length only). 
Barcodes that have either no value, or a value between :min and :max, will validate on save.
Set global constraints via :system_default, and use the repo_code value for repository-level constraints.
Note that :system_default will always inherit down its values when possible.

`AppConfig[:container_management_barcode_length] = {:system_default => {:min => 5, :max => 10}, 'repo' => {:min => 9, :max => 12}, 'other_repo' => {:min => 9, :max => 9} }`

#### `AppConfig[:container_management_extent_calculator]`

Globally defines the behavior of the extent calculator.
Use :report_volume (true/false) to define whether space should be reported in cubic
or linear dimensions.
Use :unit (:feet, :inches, :meters, :centimeters) to define the unit in which the calculator
reports extents.
Use :decimal_places to define how many decimal places the calculator should return.

Example:

`AppConfig[:container_management_extent_calculator] = { :report_volume => true, :unit => :feet, :decimal_places => 3 }`

### Record inheritance in public interface

#### `AppConfig[:record_inheritance]`

Define the fields for a record type that are inherited from ancestors
if they don't have a value in the record itself.
This is used in common/record_inheritance.rb and was developed to support
the new public UI application.
Note that any changes to the record_inheritance config will require a reindex of PUI
records to take effect.
To do this, remove the files from indexer_pui_state.

```ruby
AppConfig[:record_inheritance] = {
  :archival_object => {
    :inherited_fields => [
      {
        :property => 'title',
        :inherit_directly => true
      },
      {
        :property => 'component_id',
        :inherit_directly => false
      },
      {
        :property => 'language',
        :inherit_directly => true
      },
      {
        :property => 'dates',
        :inherit_directly => true
      },
      {
        :property => 'extents',
        :inherit_directly => false
      },
      {
        :property => 'linked_agents',
        :inherit_if => proc {|json| json.select {|j| j['role'] == 'creator'} },
        :inherit_directly => false
      },
      {
        :property => 'notes',
        :inherit_if => proc {|json| json.select {|j| j['type'] == 'accessrestrict'} },
        :inherit_directly => true
      },
      {
        :property => 'notes',
        :inherit_if => proc {|json| json.select {|j| j['type'] == 'scopecontent'} },
        :inherit_directly => false
      },
      {
        :property => 'notes',
        :inherit_if => proc {|json| json.select {|j| j['type'] == 'langmaterial'} },
        :inherit_directly => false
      },
    ]
  }
}
```

To enable composite identifiers - added to the merged record in a property
`_composite_identifier`:

The values for `:include_level` and `:identifier_delimiter` shown here are the defaults.

If `:include_level` is set to true, then level values (e.g. Series) will be included in `_composite_identifier`.

The `:identifier_delimiter` is used when joining the four-part identifier for resources.

```ruby
AppConfig[:record_inheritance][:archival_object][:composite_identifiers] = {
  :include_level => false,
  :identifier_delimiter => ' '
}
```

To configure additional elements to be inherited, use this pattern in your config:

```ruby
AppConfig[:record_inheritance][:archival_object][:inherited_fields] <<
  {
    :property => 'linked_agents',
    :inherit_if => proc {|json| json.select {|j| j['role'] == 'subject'} },
    :inherit_directly => true
  }
```

...
or use this pattern to add many new elements at once + +```ruby +AppConfig[:record_inheritance][:archival_object][:inherited_fields].concat( + [ + { + :property => 'subjects', + :inherit_if => proc {|json| + json.select {|j| + ! j['_resolved']['terms'].select { |t| t['term_type'] == 'topical'}.empty? } + }, + :inherit_directly => true + }, + { + :property => 'external_documents', + :inherit_directly => false + }, + { + :property => 'rights_statements', + :inherit_directly => false + }, + { + :property => 'instances', + :inherit_directly => false + }, + ]) +``` + +If you want to modify any of the default rules, the safest approach is to uncomment +the entire default record_inheritance config and make your changes. +For example, to stop scopecontent notes from being inherited into file or item records +uncomment the entire record_inheritance default config above, and add a skip_if +clause to the scopecontent rule, like this: + +```ruby + { + :property => 'notes', + :skip_if => proc {|json| ['file', 'item'].include?(json['level']) }, + :inherit_if => proc {|json| json.select {|j| j['type'] == 'scopecontent'} }, + :inherit_directly => false + }, +``` + +### PUI Configurations + +#### `AppConfig[:pui_search_results_page_size]` + +`AppConfig[:pui_search_results_page_size] = 10` + +#### `AppConfig[:pui_branding_img]` + +`AppConfig[:pui_branding_img] = 'archivesspace.small.png'` + +#### `AppConfig[:pui_block_referrer]` + +`AppConfig[:pui_block_referrer] = true # patron privacy; blocks full 'referer' when going outside the domain` + +#### `AppConfig[:pui_max_concurrent_pdfs]` + +The number of PDFs we'll generate (in the background) at the same time. + +PDF generation can be a little memory intensive for large collections, so we +set this fairly low out of the box. 
`AppConfig[:pui_max_concurrent_pdfs] = 2`

#### `AppConfig[:pui_pdf_timeout]`

You can set this to nil or zero to prevent a timeout.

`AppConfig[:pui_pdf_timeout] = 600`

#### `AppConfig[:pui_hide]`

`AppConfig[:pui_hide] = {}`

The following determine which 'tabs' are on the main horizontal menu:

```ruby
AppConfig[:pui_hide][:repositories] = false
AppConfig[:pui_hide][:resources] = false
AppConfig[:pui_hide][:digital_objects] = false
AppConfig[:pui_hide][:accessions] = false
AppConfig[:pui_hide][:subjects] = false
AppConfig[:pui_hide][:agents] = false
AppConfig[:pui_hide][:classifications] = false
AppConfig[:pui_hide][:search_tab] = false
```

The following determine globally whether the various "badges" appear on the Repository page.
These can be overridden at the repository level below (e.g.
`AppConfig[:repos][{repo_code}][:hide][:counts] = true`):

```ruby
AppConfig[:pui_hide][:resource_badge] = false
AppConfig[:pui_hide][:record_badge] = true # hide by default
AppConfig[:pui_hide][:digital_object_badge] = false
AppConfig[:pui_hide][:accession_badge] = false
AppConfig[:pui_hide][:subject_badge] = false
AppConfig[:pui_hide][:agent_badge] = false
AppConfig[:pui_hide][:classification_badge] = false
AppConfig[:pui_hide][:counts] = false
```

The following determines globally whether the 'container inventory' navigation
tab/pill is hidden on the resource/collection page:

```ruby
AppConfig[:pui_hide][:container_inventory] = false
```

#### `AppConfig[:pui_requests_permitted_for_types]`

Determines for which record types the request button is displayed.

`AppConfig[:pui_requests_permitted_for_types] = [:resource, :archival_object, :accession, :digital_object, :digital_object_component]`

#### `AppConfig[:pui_requests_permitted_for_containers_only]`

Set to true to permit requests only for records that have a top container.

`AppConfig[:pui_requests_permitted_for_containers_only] = false`

#### `AppConfig[:pui_repos]`

Repository-specific examples.
Replace {repo_code} with your repository code, e.g. 'foo' - note the lower case.

`AppConfig[:pui_repos] = {}`

Examples:

For a particular repository, only enable requests for certain record types (note this configuration will override `AppConfig[:pui_requests_permitted_for_types]` for the repository):

```ruby
AppConfig[:pui_repos]['foo'][:requests_permitted_for_types] = [:resource, :archival_object, :accession, :digital_object, :digital_object_component]
```

For a particular repository, permit requests only for records that have a top container:

```ruby
AppConfig[:pui_repos]['foo'][:requests_permitted_for_containers_only] = true
```

Set the email address to which any repository requests are sent:

```ruby
AppConfig[:pui_repos]['foo'][:request_email] = {email address}
```

> TODO - Needs more documentation here

```ruby
AppConfig[:pui_repos]['foo'][:hide] = {}
AppConfig[:pui_repos]['foo'][:hide][:counts] = true
```

#### `AppConfig[:pui_display_deaccessions]`

> TODO - Needs more documentation

`AppConfig[:pui_display_deaccessions] = true`

#### `AppConfig[:pui_page_actions_cite]`

Enable / disable the PUI resource/archival object page 'cite' action

`AppConfig[:pui_page_actions_cite] = true`

#### `AppConfig[:pui_page_actions_bookmark]`

Enable / disable the PUI resource/archival object page 'bookmark' action

`AppConfig[:pui_page_actions_bookmark] = true`

#### `AppConfig[:pui_page_actions_request]`

Enable / disable the PUI resource/archival object page 'request' action

`AppConfig[:pui_page_actions_request] = true`

#### `AppConfig[:pui_page_actions_print]`

Enable / disable the PUI resource/archival object page 'print' action

`AppConfig[:pui_page_actions_print] = true`

#### `AppConfig[:pui_enable_staff_link]`

When a user is authenticated, add a link back to the staff interface from the specified record.

`AppConfig[:pui_enable_staff_link] = true`

#### `AppConfig[:pui_staff_link_mode]`

By default, the staff link will open the record in the staff interface in edit mode,
+change this to 'readonly' for it to open in readonly mode + +`AppConfig[:pui_staff_link_mode] = 'edit'` + +#### `AppConfig[:pui_page_custom_actions]` + +Add page actions via the configuration + +`AppConfig[:pui_page_custom_actions] = []` + +Javascript action example: + +```ruby +AppConfig[:pui_page_custom_actions] << { + 'record_type' => ['resource', 'archival_object'], # the jsonmodel type to show for + 'label' => 'actions.do_something', # the I18n path for the action button + 'icon' => 'fa-paw', # the font-awesome icon CSS class + 'onclick_javascript' => 'alert("do something grand");', +} +``` + +Hyperlink action example: + +```ruby +AppConfig[:pui_page_custom_actions] << { + 'record_type' => ['resource', 'archival_object'], # the jsonmodel type to show for + 'label' => 'actions.do_something', # the I18n path for the action button + 'icon' => 'fa-paw', # the font-awesome icon CSS class + 'url_proc' => proc {|record| 'http://example.com/aspace?uri='+record.uri}, +} +``` + +Form-POST action example: + +```ruby +AppConfig[:pui_page_custom_actions] << { + 'record_type' => ['resource', 'archival_object'], # the jsonmodel type to show for + 'label' => 'actions.do_something', # the I18n path for the action button + 'icon' => 'fa-paw', # the font-awesome icon CSS class + # 'post_params_proc' returns a hash of params which populates a form with hidden inputs ('name' => 'value') + 'post_params_proc' => proc {|record| {'uri' => record.uri, 'display_string' => record.display_string} }, + # 'url_proc' returns the URL for the form to POST to + 'url_proc' => proc {|record| 'http://example.com/aspace?uri='+record.uri}, + # 'form_id' as string to be used as the form's ID + 'form_id' => 'my_grand_action', +} +``` + +ERB action example: + +```ruby +AppConfig[:pui_page_custom_actions] << { + 'record_type' => ['resource', 'archival_object'], + # the jsonmodel type to show for + # 'erb_partial' returns the path to an erb template from which the action will be rendered + 'erb_partial' 
=> 'shared/my_special_action',
}
```

#### `AppConfig[:pui_email_enabled]`

PUI email settings (emails are logged rather than sent when disabled)

`AppConfig[:pui_email_enabled] = false`

#### `AppConfig[:pui_email_override]`

See `AppConfig[:pui_repos][{repo_code}][:request_email]` above for setting repository email overrides.
`pui_email_override` is for testing: when set, this email will be the to-address for all sent emails.

`AppConfig[:pui_email_override] = 'testing@example.com'`

#### `AppConfig[:pui_request_email_fallback_to_address]`

The 'to' email address for repositories that don't define their own email

`AppConfig[:pui_request_email_fallback_to_address] = 'testing@example.com'`

#### `AppConfig[:pui_request_email_fallback_from_address]`

The 'from' email address for repositories that don't define their own email

`AppConfig[:pui_request_email_fallback_from_address] = 'testing@example.com'`

#### `AppConfig[:pui_request_use_repo_email]`

Use the repository record email address for requests (overrides config email)

`AppConfig[:pui_request_use_repo_email] = false`

#### `AppConfig[:pui_email_delivery_method]`

`AppConfig[:pui_email_delivery_method] = :sendmail`

#### `AppConfig[:pui_email_sendmail_settings]`

```ruby
AppConfig[:pui_email_sendmail_settings] = {
  location: '/usr/sbin/sendmail',
  arguments: '-i'
}
```

#### `AppConfig[:pui_email_smtp_settings]`

Applies when `AppConfig[:pui_email_delivery_method]` is set to `:smtp`.

Example SMTP configuration:

```ruby
AppConfig[:pui_email_smtp_settings] = {
  address: 'smtp.gmail.com',
  port: 587,
  domain: 'gmail.com',
  user_name: '<username>',
  password: '<password>',
  authentication: 'plain',
  enable_starttls_auto: true,
}
```

#### `AppConfig[:pui_email_perform_deliveries]`

`AppConfig[:pui_email_perform_deliveries] = true`

#### `AppConfig[:pui_email_raise_delivery_errors]`

`AppConfig[:pui_email_raise_delivery_errors] = true`

#### `AppConfig[:pui_readmore_max_characters]`
+ +The number of characters to truncate before showing the 'Read More' link on notes + +`AppConfig[:pui_readmore_max_characters] = 450` + +#### `AppConfig[:pui_expand_all]` + +Whether to expand all additional information blocks at the bottom of record pages by default. `true` expands all blocks, `false` collapses all blocks. + +`AppConfig[:pui_expand_all] = false` + +#### `AppConfig[:max_search_columns]` + +Use to specify the maximum number of columns to display when searching or browsing + +`AppConfig[:max_search_columns] = 7` diff --git a/src/content/docs/ja/customization/index.md b/src/content/docs/ja/customization/index.md new file mode 100644 index 0000000..fd97d72 --- /dev/null +++ b/src/content/docs/ja/customization/index.md @@ -0,0 +1,13 @@ +--- +title: Customization and configuration +description: Index of the pages within the Customization section of the website. +--- + +- [Configuring ArchivesSpace](./configuration) +- [Configuring LDAP authentication](./ldap) +- [Adding support for additional username/password-based authentication backends](./authentication) +- [Customizing text in ArchivesSpace](./locales) +- [ArchivesSpace Plug-ins](./plugins) +- [Theming ArchivesSpace](./theming) +- [Managing frontend assets with Bower](./bower) +- [Adding custom reports](./reports) diff --git a/src/content/docs/ja/customization/ldap.md b/src/content/docs/ja/customization/ldap.md new file mode 100644 index 0000000..ca4ac29 --- /dev/null +++ b/src/content/docs/ja/customization/ldap.md @@ -0,0 +1,70 @@ +--- +title: LDAP authentication +description: Instructions on how to manage and authenticate against one or more LDAP directories. +--- + +ArchivesSpace can manage its own user directory, but can also be +configured to authenticate against one or more LDAP directories by +specifying them in the application's configuration file. When a user +attempts to log in, each authentication source is tried until one +matches. 
+ +Here is a minimal example of an LDAP configuration: + +```ruby +AppConfig[:authentication_sources] = [{ + :model => 'LDAPAuth', + :hostname => 'ldap.example.com', + :port => 389, + :base_dn => 'ou=people,dc=example,dc=com', + :username_attribute => 'uid', + :attribute_map => {:cn => :name}, +}] +``` + +With this configuration, ArchivesSpace performs authentication by +connecting to `ldap://ldap.example.com:389/`, binding anonymously, +searching the `ou=people,dc=example,dc=com` tree for `uid = <username>`. + +If the user is found, ArchivesSpace authenticates them by +binding using the password specified. Finally, the `:attribute_map` +entry specifies how LDAP attributes should be mapped to ArchivesSpace +user attributes (mapping LDAP's `cn` to ArchivesSpace's `name` in the +above example). + +Many LDAP directories don't support anonymous binding. To integrate +with such a directory, you will need to specify the username and +password of a user with permission to connect to the directory and +search for other users. Modifying the previous example for this case +looks like this: + +```ruby +AppConfig[:authentication_sources] = [{ + :model => 'LDAPAuth', + :hostname => 'ldap.example.com', + :port => 389, + :base_dn => 'ou=people,dc=example,dc=com', + :username_attribute => 'uid', + :attribute_map => {:cn => :name}, + :bind_dn => 'uid=archivesspace_auth,ou=system,dc=example,dc=com', + :bind_password => 'secretsquirrel', +}] +``` + +Finally, some LDAP directories enforce the use of SSL encryption. 
To
+configure ArchivesSpace to connect via LDAPS, change the port as
+appropriate and specify the `encryption` option:
+
+```ruby
+AppConfig[:authentication_sources] = [{
+  :model => 'LDAPAuth',
+  :hostname => 'ldap.example.com',
+  :port => 636,
+  :base_dn => 'ou=people,dc=example,dc=com',
+  :username_attribute => 'uid',
+  :attribute_map => {:cn => :name},
+  :bind_dn => 'uid=archivesspace_auth,ou=system,dc=example,dc=com',
+  :bind_password => 'secretsquirrel',
+  :encryption => :simple_tls,
+}]
+```
diff --git a/src/content/docs/ja/customization/locales.md b/src/content/docs/ja/customization/locales.md
new file mode 100644
index 0000000..f408128
--- /dev/null
+++ b/src/content/docs/ja/customization/locales.md
@@ -0,0 +1,78 @@
+---
+title: Customizing text
+description: Instructions for customizing text in ArchivesSpace using locale files, including how to override labels, messages, tooltips, and placeholders via the Rails I18n API.
+---
+
+ArchivesSpace has abstracted all the labels, messages and tooltips out of the
+application into the locale files, which are part of the
+[Rails Internationalization (I18n)](http://guides.rubyonrails.org/i18n.html) API.
+The locales in this directory represent the
+basis of translations for use by all ArchivesSpace applications. Each
+application may then add to or override these values with its own locale files.
+
+For a guide on managing these "i18n" files, please visit http://guides.rubyonrails.org/i18n.html
+
+You can see the source files for both the [Staff Frontend Application](https://github.com/archivesspace/archivesspace/tree/master/frontend/config/locales) and
+[Public Application](https://github.com/archivesspace/archivesspace/tree/master/public/config/locales). There is also a [common locale file](https://github.com/archivesspace/archivesspace/blob/master/common/locales/en.yml) for some values used throughout the ArchivesSpace applications.
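Conceptually, each locale file is a nested hash keyed by language code, and a lookup such as `I18n.t("brand.title")` walks that nesting with a dotted key path. A minimal pure-Ruby sketch of the idea (this is an illustration of the lookup concept, not the real I18n implementation):

```ruby
require 'yaml'

# A tiny inline locale "file" for the example
LOCALES = YAML.safe_load(<<~YML)
  en:
    brand:
      title: My Archive
      welcome_message: Welcome!
YML

# Walk a dotted key path the way I18n.t("brand.title") conceptually does
def translate(key, locale: 'en')
  key.split('.').reduce(LOCALES.fetch(locale)) { |node, part| node.fetch(part) }
end

puts translate('brand.title') # => My Archive
```

Overriding a value in a later-loaded file amounts to replacing the leaf at the same key path, which is why your plugin's YAML must mirror the nesting of the file it overrides.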
+
+The base translations are broken up:
+
+- The topmost file "en.yml" contains the translations for all the record labels, messages and tooltips in English
+- "enums/en.yml" contains the entries for the dynamic enumeration codes - add your translations to this file after importing your enumeration codes
+
+These values are pulled into the views using the `I18n.t()` method, like `I18n.t("brand.welcome_message")`.
+
+If the value you want to override is in the common locale file (like the "digital object title" field label, for example), you can change this by simply editing the locales/en.yml file in your ArchivesSpace distribution home directory. A restart is required to have the changes take effect.
+
+If the value you want to change is in either the public or staff-specific en.yml files, you can override these values using the plugins directory. For example, if you want to change the welcome message on the public frontend, make a file in your ArchivesSpace distribution called 'plugins/local/public/locales/en.yml' and put the following values in it:
+
+```yaml
+en:
+  brand:
+    title: My Archive
+    home: Home
+
+    welcome_message: HEY HEY HEY!!
+```
+
+If you restart ArchivesSpace, these values will take effect.
+
+If you are adding a new value, you will also need to add the value into the Staff Frontend Application by clicking on the System dropdown menu and choosing Manage Controlled Value Lists. Select the list and add the value. If you restart ArchivesSpace, the translation value that you set in the yml file should appear.
+
+If you're using a different language, simply swap out the en.yml for something else (like fr.yml) and update the locale setting in the config.rb file (i.e., `AppConfig[:locale] = :fr`).
+
+## Tooltips
+
+To add a tooltip to a record label, simply add a new entry with "\_tooltip"
+appended to the label's code.
For example, to add a tooltip for the Accession's
+Title field:
+
+```yaml
+en:
+  accession:
+    title: Title
+    title_tooltip: |
+      <p>The title assigned to an accession or resource. The accession title
+      need not be the same as the resource title. Moreover, a title need not
+      be expressed for the accession record, as it can be implicitly
+      inherited from the resource record to which the accession is
+      linked.</p>
+```
+
+## Placeholders
+
+For text fields or text areas, you may like to have some placeholder text to be
+displayed when the field is empty (for more details see
+http://www.w3.org/html/wg/drafts/html/master/forms.html#the-placeholder-attribute).
+Please note that while most modern browser releases support this feature,
+older versions will not.
+
+To add a placeholder to a record's text field, add a new entry with "\_placeholder"
+appended to the label's code. For example:
+
+```yaml
+en:
+  accession:
+    title: Title
+    title_placeholder: See DACS 2.3.18-2.3.22
+```
diff --git a/src/content/docs/ja/customization/plugins.md b/src/content/docs/ja/customization/plugins.md
new file mode 100644
index 0000000..c9c4f95
--- /dev/null
+++ b/src/content/docs/ja/customization/plugins.md
@@ -0,0 +1,343 @@
+---
+title: Plugins
+description: An overview of how to develop, structure, enable, and configure plugins in ArchivesSpace to customize application behavior, interface, branding, and search functionality without altering core code.
+---
+
+Plugins are a powerful feature designed to allow you to change
+most aspects of how the application behaves.
+
+Plugins provide a mechanism to customize ArchivesSpace by overriding or extending functions
+without changing the core codebase. As they are self-contained, they also permit the ready
+sharing of packages of customization between ArchivesSpace instances.
+
+The ArchivesSpace distribution comes with the `hello_world` exemplar plugin.
Please refer to its [README file](https://github.com/archivesspace/archivesspace/blob/master/plugins/hello_world/README.md) for a detailed description of how it is constructed and implemented.
+
+You can find other examples in the following plugin repositories:
+
+- [archivesspace-plugins](https://github.com/archivesspace-plugins) - plugins officially supported and maintained by the ArchivesSpace Program Team
+- [archivesspace-deprecated](https://github.com/archivesspace-deprecated) - deprecated code that is no longer supported but has been kept for future reference
+- [archivesspace-labs](https://github.com/archivesspace-labs) - an open/unmanaged repository where community members can share their code; ArchivesSnake, the community-developed Python library for interacting with the ArchivesSpace API, is also managed here
+
+## Enabling plugins
+
+Plugins are enabled by placing them in the `plugins` directory and referencing them in the
+ArchivesSpace configuration, `config/config.rb`. For example:
+
+```ruby
+AppConfig[:plugins] = ['local', 'hello_world', 'my_plugin']
+```
+
+This configuration assumes the following directories exist:
+
+    plugins
+      hello_world
+      local
+      my_plugin
+
+Note that the order that the plugins are listed in the `:plugins` configuration option
+determines the order in which they are loaded by the application.
+
+## Plugin structure
+
+The directory structure within a plugin is similar to the structure of the core application.
+The following shows the supported plugin structure. Files contained in these directories can
+be used to override or extend the behavior of the core application.
+
+    backend
+      controllers ......... backend endpoints
+      model ............... database mapping models
+      converters .......... classes for importing data
+      job_runners ......... classes for defining background jobs
+      plugin_init.rb ...... if present, loaded when the backend first starts
+      lib/bulk_import ..... bulk import processor
+    frontend
+      assets .............. static assets (such as images, javascript) in the staff interface
+      controllers ......... controllers for the staff interface
+      locales ............. locale translations for the staff interface
+      views ............... templates for the staff interface
+      plugin_init.rb ...... if present, loaded when the staff interface first starts
+    public
+      assets .............. static assets (such as images, javascript) in the public interface
+      controllers ......... controllers for the public interface
+      locales ............. locale translations for the public interface
+      views ............... templates for the public interface
+      plugin_init.rb ...... if present, loaded when the public interface first starts
+    migrations ............ database migrations
+    schemas ............... JSONModel schema definitions
+    search_definitions.rb . Advanced search fields
+
+**Note** that `backend/lib/bulk_import` is the only directory in `backend/lib/` that is loaded by the plugin manager. Other files in `backend/lib/` will not be loaded during startup.
+
+**Note** that, in order to override or extend the behavior of core models and controllers, you cannot simply put your replacement with the same name in the corresponding directory path. Core models and controllers can be overridden by adding an `after_initialize` block to `plugin_init.rb` (e.g. [aspace-hvd-pui](https://github.com/harvard-library/aspace-hvd-pui/blob/master/public/plugin_init.rb#L43)).
+
+## Overriding behavior
+
+A general rule is: to override behavior, rather than extend it, match the path
+to the file that contains the behavior to be overridden.
+
+It is not necessary for a plugin to have all of these directories.
For example, to override
+some part of a locale file for the staff interface, you can just add the following structure
+to the local plugin:
+
+    plugins/local/frontend/locales/en.yml
+
+More detailed information about overriding locale files is found in [Customizing text in ArchivesSpace](/customization/locales).
+
+## Overriding the visual (web) presentation
+
+You can directly override any view file in the core application by placing an `.erb` file of the same name in the analogous path.
+For example, if you want to override the appearance of the "Welcome" [home] page of the Public User Interface, you can make your changes to a file `show.html.erb` and place it at `plugins/my_fine_plugin/public/views/welcome/show.html.erb` (where _my_fine_plugin_ is the name of your plugin).
+
+### Implementing a broadly-applied style or javascript change
+
+Unless you want to write inline style or javascript (which may be practicable for a template or two), best practice is to create `plugins/my_fine_plugin/public/views/layout_head.html.erb` or `plugins/my_fine_plugin/frontend/views/layout_head.html.erb`, which contains the HTML statements to incorporate your javascript or css into the `<HEAD>` element of the template. Here's an example:
+
+- For the public interface, I want to change the size of the text in all links when the user is hovering.
+  - I create `plugins/my_fine_plugin/public/assets/my.css`:
+    ```css
+    a:hover {
+      font-size: 2em;
+    }
+    ```
+  - I create `plugins/my_fine_plugin/public/views/layout_head.html.erb`, and insert:
+    ```erb
+    <%= stylesheet_link_tag "#{@base_url}/assets/my.css", media: :all %>
+    ```
+- For the public interface, I want to add some javascript behavior such that, when the user hovers over a list item, asterisks appear
+  - I create `plugins/my_fine_plugin/public/assets/my.js`:
+    ```javascript
+    $(function () {
+      $('li').hover(
+        function () {
+          $(this).append($('<span> ***</span>'))
+        },
+        function () {
+          $(this).find('span:last').remove()
+        }
+      )
+    })
+    ```
+  - I add to `plugins/my_fine_plugin/public/views/layout_head.html.erb`:
+    ```erb
+    <%= javascript_include_tag "#{@base_url}/assets/my.js" %>
+    ```
+
+## Adding your own branding
+
+As another example, to override the branding of the staff interface, add
+your own template at:
+
+    plugins/local/frontend/views/site/_branding.html.erb
+
+Files such as images, stylesheets and PDFs can be made available as static resources by
+placing them in an `assets` directory under an enabled plugin. For example, the following file:
+
+    plugins/local/frontend/assets/my_logo.png
+
+will be available via the following URL:
+
+    http://your.frontend.domain.and:port/assets/my_logo.png
+
+For example, to reference this logo from the custom branding file, use
+markup such as:
+
+```erb
+  <div class="container branding">
+    <img src="<%= AppConfig[:frontend_proxy_prefix] %>assets/my_logo.png" alt="My logo" />
+  </div>
+```
+
+## Customizing the favicon
+
+A favicon is an icon associated with a web page that browsers and operating systems display (i.e. in a browser's address bar or tab, next to the web page name in a bookmark list, etc.).
+
+### Default images
+
+The ArchivesSpace favicons are stored in the top-level `public/` directory of the frontend and public applications.
+
+1. `frontend/public/favicon-AS.png`
+2. `frontend/public/favicon-AS.svg`
+3. `public/public/favicon-AS.png`
+4. `public/public/favicon-AS.svg`
+
+### Markup
+
+Favicon markup is found in each application's favicon partial template:
+
+1. `frontend/app/views/site/_favicon.html.erb`
+2. `public/app/views/shared/_favicon.html.erb`
+
+### Configuration
+
+Favicons are shown by default via the configuration options in `config.rb` (or `common/config/config-defaults.rb` in development). Set the respective option to `false` to not show a favicon.
+
+```rb
+# config.rb
+AppConfig[:pui_show_favicon] = true # whether or not to show a favicon
+AppConfig[:frontend_show_favicon] = true # whether or not to show a favicon
+```
+
+### Plugin examples
+
+Replace the default favicon with your own via a plugin.
+
+:::caution[Reserved favicon filenames]
+Custom favicon files must be named something other than `favicon-AS.png` and `favicon-AS.svg` in order to override the default favicon.
+:::
+
+#### Frontend
+
+The frontend plugin should have the following directory structure:
+
+```
+plugins/local/frontend/
+├── assets
+│   ├── favicon.png
+│   └── favicon.svg
+└── views
+    └── site
+        └── _favicon.html.erb
+```
+
+The frontend favicon template should look something like:
+
+```erb
+<!-- plugins/local/frontend/views/site/_favicon.html.erb -->
+<link rel="icon" type="image/png" href="<%= AppConfig[:frontend_proxy_prefix] %>assets/favicon.png">
+<link rel="icon" type="image/svg+xml" href="<%= AppConfig[:frontend_proxy_prefix] %>assets/favicon.svg">
+```
+
+#### Public
+
+The public plugin should have the following directory structure:
+
+```
+plugins/local/public/
+├── assets
+│   ├── favicon.png
+│   └── favicon.svg
+└── views
+    └── shared
+        └── _favicon.html.erb
+```
+
+The public favicon template should look something like:
+
+```erb
+<!-- plugins/local/public/views/shared/_favicon.html.erb -->
+<link rel="icon" type="image/png" href="<%= asset_path('favicon.png', skip_pipeline: true) %>">
+<link rel="icon"
type="image/svg+xml" href="<%= asset_path('favicon.svg', skip_pipeline: true) %>">
+```
+
+## Plugin configuration
+
+Plugins can optionally contain a configuration file at `plugins/[plugin-name]/config.yml`.
+This configuration file supports the following options:
+
+    system_menu_controller
+      The name of a controller that will be accessible via a Plugins menu in the System toolbar
+    repository_menu_controller
+      The name of a controller that will be accessible via a Plugins menu in the Repository toolbar
+    parents
+      [record-type]
+        name
+        cardinality
+      ...
+
+`system_menu_controller` and `repository_menu_controller` specify the names of frontend controllers
+that will be accessible via the system and repository toolbars respectively. A `Plugins` dropdown
+will appear in the toolbars if any enabled plugins have declared these configuration options. The
+controller name follows the standard naming conventions. For example:
+
+```yaml
+repository_menu_controller: hello_world
+```
+
+This points to a controller file at `plugins/hello_world/frontend/controllers/hello_world_controller.rb`
+which implements a controller class called `HelloWorldController`. When the menu item is selected
+by the user, the `index` action is called on the controller.
+
+Note that the URLs for plugin controllers are scoped under `plugins`, so the URL for the above
+example is:
+
+    http://your.frontend.domain.and:port/plugins/hello_world
+
+Also note that the translation for the plugin's name in the `Plugins` dropdown menu is specified
+in a locale file in the `frontend/locales` directory in the plugin.
For example, in the `hello_world`
+example there is an English locale file at:
+
+    plugins/hello_world/frontend/locales/en.yml
+
+The translation for the plugin name in the `Plugins` dropdown menus is specified by the key `label`
+under the plugin, like this:
+
+```yaml
+en:
+  plugins:
+    hello_world:
+      label: Hello World
+```
+
+Note that the example locale file contains other keys that specify translations for text displayed
+as part of the plugin's user interface. Be sure to place your plugin's translations as shown, under
+`plugins.[your_plugin_name]`, in order to avoid accidentally overriding translations for other
+interface elements. In the example above, the translation for the `label` key can be referenced
+directly in an ERB view file as follows:
+
+```erb
+<%= I18n.t("plugins.hello_world.label") %>
+```
+
+Each entry under `parents` specifies a record type that this plugin provides a new subrecord for.
+`[record-type]` is the name of the existing record type, for example `accession`. `name` is the
+name of the plugin in its role as a subrecord of this parent, for example `hello_worlds`.
+`cardinality` specifies the cardinality of the plugin records. Currently supported values are
+`zero-to-many` and `zero-to-one`.
+
+## Changing search behavior
+
+A plugin can add additional fields to the advanced search interface by
+including a `search_definitions.rb` file at the top level of the
+plugin directory. This file can contain definitions such as the
+following:
+
+```ruby
+AdvancedSearch.define_field(:name => 'payment_fund_code', :type => :enum, :visibility => [:staff], :solr_field => 'payment_fund_code_u_utext')
+AdvancedSearch.define_field(:name => 'payment_authorizers', :type => :text, :visibility => [:staff], :solr_field => 'payment_authorizers_u_utext')
+```
+
+Each field defined will appear in the advanced search interface as a
+searchable field.
The `:visibility` option controls whether the field
+is presented in the staff or public interface (or both), while the
+`:type` parameter determines what sort of search is being performed.
+Valid values are `:text`, `:boolean`, `:date` and `:enum`. Finally,
+the `:solr_field` parameter controls which field is used from the
+underlying index.
+
+## Adding Custom Reports
+
+Custom reports may be added to plugins by adding a new report model as a subclass of `AbstractReport` to `plugins/[plugin-name]/backend/model/`, and the translations for said model to `plugins/[plugin-name]/frontend/locales/[language].yml`. Look to existing reports in the reports subdirectory of the ArchivesSpace base directory for examples of how to structure a report model.
+
+There are several limitations to adding reports to plugins, including that reports from plugins may only use the generic report template. ArchivesSpace only searches for report templates in the reports subdirectory of the ArchivesSpace base directory, not in plugin directories. If you would like to implement a custom report with a custom template, consider adding the report to `archivesspace/reports/` instead of `archivesspace/plugins/[plugin-name]/backend/model/`.
+
+## Frontend Specific Hooks
+
+To make adding new record fields and sections to record forms a little easier via your plugin, the ArchivesSpace frontend provides a series of hooks via the `frontend/config/initializers/plugin.rb` module. These are as follows:
+
+- `Plugins.add_search_base_facets(*facets)` - add to the base facets list to include extra facets for all record searches and listing pages.
+
+- `Plugins.add_search_facets(jsonmodel_type, *facets)` - add facets for a particular JSONModel type to be included in searches and listing pages for that record type.
+
+- `Plugins.add_resolve_field(field_name)` - use this when you have added a new field/relationship and you need it to be resolved when the record is retrieved from the API.
+
+- `Plugins.register_edit_role_for_type(jsonmodel_type, role)` - when you add a new top level JSONModel, register it and its edit role so the listing view can determine if the "Edit" button can be displayed to the user.
+
+- `Plugins.register_note_types_handler(proc)` where proc handles parameters `jsonmodel_type, note_types, context` - allows a plugin to customize the note types shown for a particular JSONModel type. For example, you can filter those that do not apply to your institution.
+
+- `Plugins.register_plugin_section(section)` - allows you to define a template to be inserted as a section for a given JSONModel record. A section is a type of `Plugins::AbstractPluginSection` which defines the source `plugin`, section `name`, the `jsonmodel_types` for which the section should show and any `opts` required by the templates at the time of render. These new sections (readonly, edit and sidebar additions) are output as part of the `PluginHelper` render methods.
+
+  `Plugins::AbstractPluginSection` can be subclassed to allow flexible inclusion of arbitrary HTML. There are two examples provided with ArchivesSpace:
+  - `Plugins::PluginSubRecord` - uses the `shared/subrecord` partial to output a standard styled ArchivesSpace section. `opts` requires the jsonmodel field to be defined.
+
+  - `Plugins::PluginReadonlySearch` - uses the `search/embedded` partial to output a search listing as a section. `opts` requires the custom filter terms for this search to be defined.
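As a rough illustration, several of these hooks might be combined in a staff-interface `plugin_init.rb`. This is a hypothetical sketch rather than a drop-in file: the facet and field names are invented for the example (reusing the Solr field naming style shown in the search-definitions example above), and it only runs inside the ArchivesSpace frontend where the `Plugins` module is loaded.

```ruby
# plugins/my_fine_plugin/frontend/plugin_init.rb (hypothetical sketch)

# Show an extra facet on every search and listing page
Plugins.add_search_base_facets('payment_fund_code_u_utext')

# Show a facet only on accession searches and listings
Plugins.add_search_facets('accession', 'payment_fund_code_u_utext')

# Resolve a custom relationship when records are fetched from the API
Plugins.add_resolve_field('payment_authorizers')
```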
+ +## Further information + +**Be sure to test your plugin thoroughly as it may have unanticipated impacts on your +ArchivesSpace application.** diff --git a/src/content/docs/ja/customization/reports.md b/src/content/docs/ja/customization/reports.md new file mode 100644 index 0000000..343513a --- /dev/null +++ b/src/content/docs/ja/customization/reports.md @@ -0,0 +1,51 @@ +--- +title: Reports +description: Instructions for creating custom reports and subreports in ArchivesSpace, including required structure, SQL usage, translations, optional customization methods, and integration with the reporting framework. +--- + +Adding a report is intended to be a fairly simple process. The requirements for creating a report are outlined below. + +## Adding a Report + +### Required + +- Create a class for your report that is a subclass of AbstractReport. +- Call register_report. If your report has any parameters, specify them here. +- Implement query_string + - This should be a raw SQL string + - To prevent SQL injection, use db.literal for any user input i.e. use `"select * from table where column = #{db.literal(value)}" ` instead of `"select * from table where column = '#{value}'"` +- Provide translations for column headers and the title of your report + - They should be in yml files under _language_.reports._report name_ + - The translation for title should be whatever you want the name of the report to be. + - If the translation you want is already in _language_.reports.translation_defaults (found in the static folder) you do not need to specify it. + - Translations specific to the individual report are given priority over translation defaults. + +### Optional + +- Implement your own initializer if your report has any parameters. +- Implement fix_row in order to clean up data and add subreports. + - Each result will be passed to fix_row as a hash + - ReportUtils offers various class methods to simplify cleaning up data. 
+ - You can also add subreports here with something like `row[:subreport_name] = SubreportClassName.new(self, row[:id]).get_content` where row is the result as a hash which was a parameter to fix_row. See [Adding a Subreport](#adding-a-subreport) for more information on adding subreports. + - Sometimes you will want to delete something from the result that you needed in order to generate a subreport but do not want to show up in the final report (such as id). To do this use `row.delete(:id)`. +- Special implementation of query - The default implementation is simply `db.fetch(query_string)` but implementing it yourself may give you more flexibility. In the end, it needs to return a result set. +- There is a hash called info that controls what shows up in the header at the top of the report. Examples may include total record count, total extent, or any parameters that are provided by the user for your report. Add anything you want to show up in the report header to info. Repository name will be included automatically. Be sure to provide translations for the keys you add to info. +- after_tasks is run after fix_row executes on all the results. Implement this if you have anything that needs to get done here before the report is rendered +- Specify identifier_field if you want to add a heading to each individual record. For instance, identifier_field might be `:accession_number` for a report on accessions. +- Implement page_break to be false if you do not want a page break after each record in the PDF of the report. +- Implement special_translation if there is anything you want translate in a special way (i.e. it can't be accomplished by the yml file). + +## Adding A Subreport + +### Required + +- Create a class for your subreport that is a subclass of AbstractSubreport. +- Create an initializer that takes in the parent report/subreport as well as any parameters you need to run the subreport (usually this is just an id from the result in the parent report/subreport). 
Your initializer should call `super(parent_report)`.
+- Implement query_string. This works the same way as it does for reports.
+- Provide necessary translations.
+
+### Optional
+
+- Special implementation of query
+- fix_row works just like in reports
+  - note that you can add nested subreports
diff --git a/src/content/docs/ja/customization/theming.md b/src/content/docs/ja/customization/theming.md
new file mode 100644
index 0000000..9e15c0a
--- /dev/null
+++ b/src/content/docs/ja/customization/theming.md
@@ -0,0 +1,141 @@
+---
+title: Theming
+description: A guide to customizing the look and feel of ArchivesSpace using plugins or full theme rebuilds, including instructions for changing logos, CSS, and layout elements in both the public and staff interfaces.
+---
+
+## Making small changes
+
+It's easiest to use a plugin for small changes to your site's theme. With a plugin,
+we can override default views, controllers, models, etc. without having to do a
+complete rebuild of the source code. Be sure to remove the `#` at the beginning of
+any configuration line that you want to change; any line that starts with a `#` is ignored.
+
+Let's say we wanted to change the branding logo on the public
+interface. That can be easily changed in your `config.rb` file:
+
+```ruby
+AppConfig[:pui_branding_img]
+```
+
+That setting is used by the file found in `public/app/views/shared/_header.html.erb` to display your PUI side logo. You don't need to change that file, only the setting in your `config.rb` file.
+
+You can store the image in `plugins/local/public/assets/images/logo.png`. You'll most likely need to create one or more of the directories.
Your `AppConfig[:pui_branding_img]` setting should look something like this:
+
+```ruby
+AppConfig[:pui_branding_img] = '/assets/images/logo.png'
+```
+
+Alt text for the PUI branding image can and should also be supplied via:
+
+```ruby
+AppConfig[:pui_branding_img_alt_text] = 'My alt text'
+```
+
+If you want your image on the PUI to link out to another location, you will need to make a change to the file `public/app/views/shared/_header.html.erb`. The line that creates the logo just needs an `<a href>` added. You should also alter `AppConfig[:pui_branding_img_alt_text]` to make it clear that the image also functions as a link (e.g. `AppConfig[:pui_branding_img_alt_text] = 'Back to Example College Home'`). That will end up looking something like this:
+
+```erb
+<div class="col-sm-3 hidden-xs"><a href="https://example.com"><img class="logo" src="<%= asset_path(AppConfig[:pui_branding_img]) %>" alt="<%= AppConfig[:pui_branding_img_alt_text] %>" /></a></div>
+```
+
+The Staff Side logo will need a small plugin file and cannot be set in your `config.rb` file. This needs to be changed in the `plugins/local/frontend/views/site/_branding.html.erb` file. You'll most likely need to create one or more of the directories. Then create that `_branding.html.erb` file and paste in the following code:
+
+```erb
+<div class="container-fluid navbar-branding">
+  <%= image_tag "archivesspace/archivesspace.small.png", :class=>"img-responsive", :alt=>"My image alt text" %>
+</div>
+```
+
+Change the `"archivesspace/archivesspace.small.png"` to the path to your image `/assets/images/logo.png` and place your logo in the `plugins/local/frontend/assets/images/` directory. You'll most likely need to create one or more of the directories.
**Note:** Since anything we add to the plugins directory will not be precompiled by
+the Rails asset pipeline, we cannot use some of the tag helpers
+(like `img_tag`), since those assume the asset is being managed by the
+asset pipeline.
+
+Restart the application and you should see your logo in the default view.
+
+## Adding CSS rules
+
+You can customize CSS through the plugin system too. If you don't want to create
+a whole new plugin, the easiest way is to modify the 'local' plugin that ships
+with ArchivesSpace (it's intended for this kind of site-specific change). As
+long as you've still got 'local' listed in your `AppConfig[:plugins]` list, your
+changes will get picked up.
+
+To do that, create a file called
+`archivesspace/plugins/local/frontend/views/layout_head.html.erb` for the staff
+side or `archivesspace/plugins/local/public/views/layout_head.html.erb` for the
+public. Then you can add the line to include the CSS in the site:
+
+```erb
+<%= stylesheet_link_tag "#{@base_url}/assets/custom.css" %>
+```
+
+Then place your CSS in the file:
+
+    staff side:
+    archivesspace/plugins/local/frontend/assets/custom.css
+    or public side:
+    archivesspace/plugins/local/public/assets/custom.css
+
+and it will get loaded on each page.
+
+You may also want to make changes to the main index page, or the header and
+footer. Those overrides would go into the following places for the public side
+of your site:
+
+    archivesspace/plugins/local/public/views/welcome/show.html.erb
+    archivesspace/plugins/local/public/views/shared/_header.html.erb
+    archivesspace/plugins/local/public/views/shared/_footer.html.erb
+
+## Heavy re-theming
+
+If you're wanting to really trick out your site, you could do this in a plugin
+using the override methods shown above, although there are some big disadvantages
+to this. The first is that assets will not be compiled by the Rails asset
+pipeline.
Another is that you won't be able to take advantage of the variables
+and mixins that Bootstrap and Less provide as a framework, which really help
+keep your assets well organized.
+
+A better way to do this is to pull down a copy of the ArchivesSpace code and
+build out a new theme. A good resource on how to do this is
+[this video](https://www.youtube.com/watch?v=Uny736mZVnk).
+This video covers the staff frontend UI, but the same steps can be applied to
+the public UI as well.
+
+Also become a little familiar with the
+[build system instructions](/development/dev).
+
+First, pull down a new copy of ArchivesSpace using git and be sure to check out
+a tag matching the version you're using or wanting to use.
+
+```shell
+$ git clone https://github.com/archivesspace/archivesspace.git
+$ git checkout v2.5.2
+```
+
+You can start your application development servers by executing:
+
+```shell
+$ ./build/run bootstrap
+$ ./build/run backend:devserver
+$ ./build/run frontend:devserver
+$ ./build/run public:devserver
+```
+
+**Note:** You don't have to run all these commands all the time. The bootstrap
+command really only has to be run the first time you pull down the code --
+it will also take a while. You also don't have to start the frontend or public
+if you're not working on those interfaces. The backend does have to be started for
+either the public or frontend interfaces to work.
+
+Follow the instructions in the video to create a new theme. A good way is to copy the existing default theme to a new folder and start making your updates. Be sure to take advantage of the existing variables set in the Less files to keep your assets nice and organized.
+
+Once you've updated your theme and have got it working, you can package your application. You can use the `./scripts/build_release` script to build a totally fresh ArchivesSpace distribution, but you don't need to do that if you've simply made some minor changes to the UI.
Instead, use `./build/run public:war` to compile your assets and package a war file. You can then take this `public.war` file and replace your ASpace distribution war file.

Be sure to update your theme setting in the `config.rb` file and restart ASpace.
diff --git a/src/content/docs/ja/customization/xsl.md b/src/content/docs/ja/customization/xsl.md
new file mode 100644
index 0000000..5ed0605
--- /dev/null
+++ b/src/content/docs/ja/customization/xsl.md
---
title: XSL stylesheets
description: Provides information about the XSL stylesheets for transforming ArchivesSpace EAC-CPF and EAD exports into HTML or PDF, using Saxon for processing.
---

ArchivesSpace includes three stylesheets for you to transform exported data
into human-friendly formats. The stylesheets included are as follows:

- `as-eac-cpf-html.xsl`: Generates HTML from EAC-CPF records
- `as-ead-html.xsl`: Generates HTML from EAD records
- `as-ead-pdf.xsl`: Generates XSL:FO output from EAD for transformation into PDF

These stylesheets have been tested and are known to work with
[Saxon](http://saxonica.com/download/download_page.xml) 9.5.1.1 and higher.

The `as-helper-functions.xsl` stylesheet is required by the other three
stylesheets listed above.
diff --git a/src/content/docs/ja/development/dev.md b/src/content/docs/ja/development/dev.md
new file mode 100644
index 0000000..b33f69d
--- /dev/null
+++ b/src/content/docs/ja/development/dev.md
---
title: Development environment
description: Guidance for setting up a development environment for ArchivesSpace, including system requirements, supported development platforms, a quickstart guide, and step-by-step instructions.
---

System requirements:

- Java 17
- [Docker](https://www.docker.com/) & [Docker Compose](https://docs.docker.com/compose/) are optional but make running MySQL and Solr more convenient
- [Supervisord](http://supervisord.org/) is optional but makes running the development servers more convenient
- [mysql-client](https://www.bytebase.com/reference/mysql/how-to/how-to-install-mysql-client-on-mac-ubuntu-centos-windows/) is required in order to load demo data or other SQL dumps into the database

Currently supported platforms for development:

- Linux (although generally only Ubuntu is actually used / tested)
- macOS on Intel (x86_64)
- macOS on Apple silicon (ARM64) _since v4.0.0_

:::note[Apple silicon and ArchivesSpace before v4.0.0]
To install versions of ArchivesSpace prior to v4.0.0 with macOS on Apple silicon, see [https://teaspoon-consulting.com/articles/archivesspace-on-the-m1.html](https://teaspoon-consulting.com/articles/archivesspace-on-the-m1.html).
:::

:::danger[Windows development not supported]
Windows is not supported because of issues building gems with C extensions (such as sassc).
:::

When installing Java, [OpenJDK](https://openjdk.org/) is strongly recommended. Other vendors may work, but OpenJDK is the most extensively used and tested. It is highly recommended that you use a version manager such as [mise](https://mise.jdx.dev/lang/java.html) to install Java (OpenJDK). This has proven to be a reliable way of resolving cross-platform issues that have occurred via other means of installing Java.
+ +Installing OpenJDK with mise will look something like: + +```bash +mise use -g java@openjdk-17 +``` + +On Linux/Ubuntu it is generally fine to install from system packages: + +```bash +sudo apt install openjdk-$VERSION-jdk-headless +# example: install 17 +sudo apt install openjdk-17-jdk-headless +# update-java-alternatives can be used to switch between versions +sudo update-java-alternatives --list +sudo update-java-alternatives --set $version +``` + +For [Homebrew](https://brew.sh/) users (macOS, Linux), the OpenJDK distribution from Azul has been reported to work: + +```bash +# install Java v17 for example +brew install --cask zulu@17 +``` + +If using Docker & Docker Compose install them following the official documentation: + +- [https://docs.docker.com/get-docker/](https://docs.docker.com/get-docker/) +- [https://docs.docker.com/compose/install/](https://docs.docker.com/compose/install/) + +_Do not use system packages or any other unofficial source as these have been found to be inconsistent with standard Docker._ + +The recommended way of developing ArchivesSpace is to fork the repository and clone it locally. 
+ +_Note: all commands in the following instructions assume you are in the root directory of your local fork +unless otherwise specified._ + +**Quickstart** + +This is an abridged reference for getting started with a limited explanation of the steps: + +```bash +# Build images (required one time only for most use cases) +docker-compose -f docker-compose-dev.yml build +# Run MySQL and Solr in the background +docker-compose -f docker-compose-dev.yml up --detach +# Download the MySQL connector +cd ./common/lib && wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.30/mysql-connector-java-8.0.30.jar && cd - +# Download all application dependencies +./build/run bootstrap +# OPTIONAL: load dev database +gzip -dc ./build/mysql_db_fixtures/demo.sql.gz | mysql --host=127.0.0.1 --port=3306 -u root -p123456 archivesspace +# Setup the development database +./build/run db:migrate +# Clear out any existing Solr state (only needed after a database setup / restore after previous development) +./build/run solr:reset +# Run the development servers +supervisord -c supervisord/archivesspace.conf +# OPTIONAL: Run a backend (api) test (for checking setup is correct) +./build/run backend:test -Dexample="User model" +``` + +## Step by Step explanation + +### Run MySQL and Solr + +ArchivesSpace development requires MySQL and Solr to be running. The easiest and +recommended way to run them is using the Docker Compose configuration provided by ArchivesSpace. + +Start by building the images. This creates a custom Solr image that includes ArchivesSpace's configuration: + +```bash +docker-compose -f docker-compose-dev.yml build +``` + +_Note: you only need to run the above command once. 
You would only need to rerun this command if a)
you delete the image and therefore need to recreate it, or b) you make a change to ArchivesSpace's Solr
configuration and therefore need to rebuild the image to include the updated configuration._

Run MySQL and Solr in the background:

```bash
docker-compose -f docker-compose-dev.yml up --detach
```

By using Docker Compose to run MySQL and Solr you are guaranteed to have the correct connection settings
and don't otherwise need to define connection settings for MySQL or Solr.

Verify that MySQL & Solr are running: `docker ps`. It should list the running containers:

```txt
CONTAINER ID   IMAGE                       COMMAND                  CREATED       STATUS       PORTS                               NAMES
ec76bd09d73b   mysql:8.0                   "docker-entrypoint.s…"   8 hours ago   Up 8 hours   33060/tcp, 0.0.0.0:3307->3306/tcp   as_test_db
30574171530f   archivesspace/solr:latest   "docker-entrypoint.s…"   8 hours ago   Up 8 hours   0.0.0.0:8984->8983/tcp              as_test_solr
d84a6a183bb0   archivesspace/solr:latest   "docker-entrypoint.s…"   8 hours ago   Up 8 hours   0.0.0.0:8983->8983/tcp              as_dev_solr
7df930293875   mysql:8.0                   "docker-entrypoint.s…"   8 hours ago   Up 8 hours   0.0.0.0:3306->3306/tcp, 33060/tcp   as_dev_db
```

To check the servers are online:

- MySQL: `mysql -h 127.0.0.1 -u as -pas123 archivespace`
- Solr: `curl http://localhost:8983/solr/admin/cores`

To stop and / or remove the servers:

```bash
docker-compose -f docker-compose-dev.yml stop # shuts down the servers (data will be preserved)
docker-compose -f docker-compose-dev.yml rm   # deletes the containers (all data will be removed)
```

**Advanced: running MySQL and Solr outside of Docker**

You are not required to use Docker for MySQL and Solr.
If you run them another way, the default
requirements are:

- dev MySQL, localhost:3306, create db: archivesspace, username: as, password: as123
- test MySQL, localhost:3307, create db: archivesspace, username: as, password: as123
- dev Solr, localhost:8983, create archivesspace core using ArchivesSpace configuration
- test Solr, localhost:8984, create archivesspace core using ArchivesSpace configuration

The defaults can be changed using [environment variables](https://github.com/archivesspace/archivesspace/blob/master/build/build.xml#L43-L46) located in the build file.

### Download the MySQL connector

For licensing reasons the MySQL connector must be downloaded separately:

```bash
cd ./common/lib
wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.30/mysql-connector-java-8.0.30.jar
cd -
```

### Run bootstrap

The bootstrap task:

    ./build/run bootstrap

will bootstrap your development environment by downloading all
dependencies--JRuby, Gems, etc. This one command creates a fully
self-contained development environment where everything is downloaded
within the ArchivesSpace project `build` directory.

_It is not necessary and generally incorrect to manually install JRuby
& bundler etc. for ArchivesSpace (whether with a version manager or
otherwise)._

_The self-contained ArchivesSpace development environment typically does
not interact with other J/Ruby environments you may have on your system
(such as those managed by rbenv or similar)._

This is the starting point for all ArchivesSpace development. You may need
to re-run this command after fetching updates, or when making changes to
Gemfiles or other dependencies such as those in the `./build/build.xml` file.
**Errors running bootstrap**

Starting any of the development servers:

    ./build/run backend:devserver
    ./build/run frontend:devserver
    ./build/run public:devserver
    ./build/run indexer

may fail with an error like:

```txt
 [java] INFO: jetty-9.4.44.v20210927; built: 2021-09-27T23:02:44.612Z; git: 8da83308eeca865e495e53ef315a249d63ba9332; jvm 11+28
 [java] Exiting
 [java] LoadError: no such file to load -- rails/commands
 [java]   require at org/jruby/RubyKernel.java:974
 [java]    <main> at script/rails:8
```

There have been various forms of the same `LoadError`. It's a transient error
that is resolved by rerunning bootstrap.

```txt
 [java] org.jruby.Main -I uri:classloader://META-INF/jruby.home/lib/ruby/stdlib -r
 [java] ./siteconf20220407-5224-13f6qi7.rb extconf.rb
 [java] sh: /Library/Internet: No such file or directory
 [java] sh: line 0: exec: /Library/Internet: cannot execute: No such file or directory
 [java]
 [java] extconf failed, exit code 126
```

This has been seen on Mac platforms resulting from the installation method
for Java. Installing the OpenJDK via Jabba has been effective in resolving
this error.

**Advanced: bootstrap & the build directory**

Running bootstrap will download jars to the build directory, including:

- jetty-runner
- jruby
- jruby-rack

Gems will be downloaded to: `./build/gems/jruby/$version/gems/`.

### Setup the development database

The migrate task:

```bash
./build/run db:migrate
```

will set up the development database, creating all of the tables etc.
required by the application.

There is a task for resetting the database:

```bash
./build/run db:nuke
```

which will first delete then migrate the database.

### Loading data fixtures into dev database

When loading a database into the development MySQL instance, always ensure that ArchivesSpace
is **not** running. Stop ArchivesSpace if it is running. Run `./build/run solr:reset` to
clear indexer state (a more thorough explanation of this step is described below).
If you are loading a database and MySQL has already been used for development, you'll want to
drop and create an empty database first:

```bash
mysql -h 127.0.0.1 -u as -pas123 -e "DROP DATABASE archivesspace"
mysql -h 127.0.0.1 -u as -pas123 -e "CREATE DATABASE IF NOT EXISTS archivesspace DEFAULT CHARACTER SET utf8mb4"
```

_Note: you can skip the above step if MySQL was just started for the first time or any time you
have an empty ArchivesSpace database (one where `db:migrate` has not been run)._

Assuming you have MySQL running and an empty `archivesspace` database available, you can proceed
to restore:

```bash
gzip -dc ./build/mysql_db_fixtures/blank.sql.gz | mysql --host=127.0.0.1 --port=3306 -u root -p123456 archivesspace
./build/run db:migrate
```

_Note: The above instructions should work out-of-the-box. If you want to use your own database
and / or have configured MySQL differently, then adjust the commands as needed._

After the restore, `./build/run db:migrate` is run to catch any migration updates. You can now
proceed to run the application dev servers, as described below, with data already
populated in ArchivesSpace.

### Clear out existing Solr state

The Solr reset task:

```bash
./build/run solr:reset
```

will wipe out any existing Solr state. This is not required when setting
up for the first time, but is often required after a database reset (such as
after running the `./build/run db:nuke` task).
_More specifically, what this does is submit a delete-all request to Solr and empty
out the contents of the `./build/dev/indexer*_state` directories, which are described
below._

### Run the development servers

Use [Supervisord](http://supervisord.org/) for a simpler way of running the development servers, with output
for all servers sent to a single terminal window:

```bash
# run all of the services
supervisord -c supervisord/archivesspace.conf

# run in api mode (backend + indexer only)
supervisord -c supervisord/api.conf

# run just the backend (useful for trying out endpoints that don't require Solr)
supervisord -c supervisord/backend.conf
```

ArchivesSpace is started with:

- the staff interface on [http://localhost:3000/](http://localhost:3000/)
- the PUI on [http://localhost:3001/](http://localhost:3001/)
- the API on [http://localhost:4567/](http://localhost:4567/)

To stop supervisord: `Ctrl-c`.

#### Advanced: running the development servers directly

Supervisord is not required, nor is it ideal for every situation. You can run the development
servers directly via build tasks:

```bash
./build/run backend:devserver  # This is the REST API
./build/run frontend:devserver # This is the staff user interface
./build/run public:devserver   # This is the public user interface
./build/run indexer            # This is the indexer (converts ASpace records to Solr Docs and ships to Solr)
```

These should be run in different terminal sessions; they do not need to be run
in any particular order, nor are they all required.
_An example use case for running a server directly is to use the pry debugger._

#### Advanced: debugging with pry

To debug with pry you cannot use supervisord to run the application devserver;
however, you can mix and match:

```bash
# run the backend and indexer with supervisord
supervisord -c supervisord/api.conf

# in a separate terminal run the frontend directly
./build/run frontend:devserver
```

Add `require 'pry-debugger-jruby'; binding.pry` to set breakpoints in the code. This can also be used in views:
`<% require 'pry-debugger-jruby'; binding.pry %>`. Using pry you can easily inspect the `request`, `params` and
in-scope instance variables that are available. Typical debugger commands are available:

- `step`: Step execution into the next line or method. Takes an optional numeric argument to step multiple times.
- `next`: Step over to the next line within the same frame. Takes an optional numeric argument to step multiple times. Differs from step in that it always stays within the same frame (e.g. does not go into other method calls).
- `finish`: Execute until the current stack frame returns.
- `continue`: Continue program execution and end the Pry session.
- `puts caller.join("\n")`: Get the current stacktrace.

See also the [pry-debugger-jruby docs](https://gitlab.com/ivoanjo/pry-debugger-jruby).

#### Advanced: development servers and the build directory

Running the development servers will create directories in `./build/dev`:

- `indexer_pui_state`: latest timestamps for PUI indexer activity
- `indexer_state`: latest timestamps for (SUI) indexer activity
- `shared`: background job files

_Note: the folders will be created as they are needed, so they may not all be present
at all times._

#### Accessing development servers from other devices on the local network

You can access the ArchivesSpace development servers from other devices on your local network.
This is especially useful for testing on mobile operating systems.

##### Prerequisites

1. Your development machine and the other device must be on the same WiFi network
2. The ArchivesSpace development servers must be running on the development machine

##### Steps

1. Get your development machine's local IP address

   On macOS:

   ```bash
   ipconfig getifaddr en0
   ```

   On Linux:

   ```bash
   hostname -I | awk '{print $1}'
   ```

   This returns something like `192.168.0.47`.

2. Start the [development servers](#run-the-development-servers)

   The development servers bind to `0.0.0.0` by default, making them accessible from other devices on the network (see the [frontend binding example](https://github.com/archivesspace/archivesspace/blob/f77dec627cd1feac77e4b67f9242d617efe80e94/build/build.xml#L899)).

3. Access from another device

   On the other device, open a web browser and navigate to your development machine's IP address with the appropriate port, e.g. `http://<your-local-ip>:<port>/`.

   So for IP address `192.168.0.47`:
   - Staff interface: `http://192.168.0.47:3000/`
   - Public interface: `http://192.168.0.47:3001/`
   - API: `http://192.168.0.47:4567/`

## Running the tests

### Backend tests

_By default the tests are configured to run using a separate MySQL & Solr from the
development servers. This means that the development and test environments will not
interfere with each other._

```bash
# run the backend / api tests
./build/run backend:test
```

You can also run a single spec file with:

```bash
./build/run backend:test -Dspec="myfile_spec.rb"
```

Or a single example with:

```bash
./build/run backend:test -Dexample="does something important"
```

Or by file line with:

```bash
./build/run backend:test -Dspec="myfile_spec.rb:123"
```

There are specific instructions and requirements for the [UI tests](/development/ui_test) to work.
**Advanced: tests and the build directory**

Running the tests may create directories in `./build/test`. These will be
the same as for the development servers as described above.

## Coverage reports

You can run the coverage reports using:

    ./build/run coverage

This runs all of the above tests in coverage mode and, when the run
finishes, produces a set of HTML reports within the `coverage`
directory in your ArchivesSpace project directory.

## Linting and formatting with Rubocop

If you are editing or adding source files that you intend to contribute via a pull request,
you should make sure your changes conform to the layout and style rules by running:

    ./build/run rubocop

Most errors can be auto-corrected by running:

    ./build/run rubocop -Dcorrect=true

## Submitting a Pull Request

When you have code ready to be reviewed, open a pull request to ask for it to be
merged into the codebase.

To help make the review go smoothly, here are some general guidelines:

- **Your pull request should address a single issue.**
  It's better to split large or complicated PRs into discrete steps if possible. This
  makes review more manageable and reduces the risk of conflicts with other changes.
- **Give your pull request a brief title, referencing any JIRA or GitHub issues resolved
  by the pull request.**
  Including JIRA numbers (e.g. 'ANW-123') explicitly in your pull request title ensures the
  PR will be linked to the original issue in JIRA. Similarly, referencing GitHub issue numbers
  (e.g. 'Fixes #123') will automatically close that issue when the PR is merged.
- **Fill out as much of the Pull Request template as is possible/relevant.**
  This makes it easier to understand the full context of your PR, including any discussions or supporting documentation that went into developing the functionality or resolving the bug.
## Building a distribution

See: [Building an ArchivesSpace Release](/development/release) for information on building a distribution.

## Generating API documentation

See: [Building an ArchivesSpace Release](/development/release) for information on building the documentation.
diff --git a/src/content/docs/ja/development/docker.md b/src/content/docs/ja/development/docker.md
new file mode 100644
index 0000000..8168231
--- /dev/null
+++ b/src/content/docs/ja/development/docker.md
---
title: Docker
description: A guide to using the Docker configuration with ArchivesSpace.
---

The [Docker](https://www.docker.com/) configuration is used to create [automated builds](https://hub.docker.com/r/archivesspace/archivesspace/) on Docker Hub, which are deployed to [the latest version](http://test.archivesspace.org) when the build completes.

## Custom builds

Run ArchivesSpace with MySQL, external Solr and a web proxy. Switch to the
branch you want to build:

```bash
# if you already have running containers and want to clear them out
docker-compose stop
docker-compose rm

# build the local image
docker-compose build # needed whenever the branch is changed and ready to test
docker-compose up

# running specific containers
docker-compose up -d db solr # in background
docker-compose up app web    # in foreground

# to access a running container
docker exec -it archivesspace_app_1 bash
```

## Sharing an image

To share the built image, the easiest way is to create an account on [Docker Hub](https://hub.docker.com/). Next, retag the image and push it to the hub account:

```bash
DOCKER_ID_USER=example
TAG=awesome-updates
docker tag archivesspace_app:latest $DOCKER_ID_USER/archivesspace:$TAG
docker push $DOCKER_ID_USER/archivesspace:$TAG
```

To download the image: `docker pull example/archivesspace:awesome-updates`.
diff --git a/src/content/docs/ja/development/e2e_tests.md b/src/content/docs/ja/development/e2e_tests.md
new file mode 100644
index 0000000..2a78b10
--- /dev/null
+++ b/src/content/docs/ja/development/e2e_tests.md
---
title: ArchivesSpace End-to-End Test Suite
description: Instructions on running the end-to-end test suite.
---

For more context on the [End-to-End test suite](https://github.com/archivesspace/archivesspace/tree/master/e2e-tests) and how to contribute tests, see our [wiki page](https://archivesspace.atlassian.net/wiki/spaces/ADC/pages/4606590990/How+to+contribute+End+to+End+test+scenarios).

## Recommended setup

### Using a version manager

The required Ruby version for the e2e test application is documented in [`./.ruby-version`](./.ruby-version).

It is strongly recommended to use a version manager (such as [mise](https://mise.jdx.dev/)) so that you can switch to whatever version a given project requires.

#### mise

We recommend using [mise](https://mise.jdx.dev/) to manage Ruby (and other runtimes). Installation instructions are available at [Getting started](https://mise.jdx.dev/getting-started.html).

#### Alternatives to `mise`

If you wish to use a different Ruby manager or installation method, see [Ruby's installation documentation](https://www.ruby-lang.org/en/documentation/installation/).

### Installation

From the ArchivesSpace root directory, navigate to the e2e test application, then install Ruby, Bundler, and the application dependencies:

```sh
# 1. Navigate to e2e-tests directory
cd e2e-tests

# 2. Install Ruby at the version specified in ./.tool-versions
mise install

# 3. Install the Bundler dependency manager
gem install bundler

# 4.
# Install project dependencies
bundle install
```

## Running the tests locally

### Just working on the e2e tests with Docker

If you are just working on e2e tests and not touching the ArchivesSpace application, you can run e2e tests locally against the latest ArchivesSpace `master` branch build using Docker.

#### Install Docker Desktop

[Docker Desktop](https://www.docker.com/get-started/) is a one-click-install application for Linux, Mac, and Windows. It provides both terminal and GUI access to Docker. Download and install the appropriate version for your operating system from the link above. You can also use alternative software for running Docker containers, such as [OrbStack](https://orbstack.dev/) for macOS.

#### Run the latest ArchivesSpace Docker image

```sh
# Get the latest ArchivesSpace `master` branch build
docker compose pull

# Start ArchivesSpace servers
docker compose up
```

Verify the servers are running by opening [http://localhost:8080](http://localhost:8080) in a browser.

### Working with an ArchivesSpace development environment

You can run the e2e test suite against your local ArchivesSpace development environment, but be aware that your database, Solr index, and any configuration changes will need to be reset.

#### Reset your database and Solr index

Make sure your ArchivesSpace instance has a [blank database](https://docs.archivesspace.org/development/dev/#loading-data-fixtures-into-dev-database) and [blank Solr index](https://docs.archivesspace.org/development/dev/#clear-out-existing-solr-state).

#### Restore default configuration options (except for `AppConfig[:db_url]`)

Make sure you revert any local changes to the default configuration options (via `../common/config/config.rb`) by commenting them out or deleting them, except for `AppConfig[:db_url]` (which is required for using the MySQL database).
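For reference, a stripped-down `config.rb` for this scenario might contain nothing but the database URL. The exact value below is an assumption based on the Docker Compose development defaults described in the development environment docs (host `127.0.0.1:3306`, database `archivesspace`, user `as`, password `as123`); adjust it to your local MySQL settings:

```ruby
# ../common/config/config.rb -- every other option left at its default
AppConfig[:db_url] = "jdbc:mysql://127.0.0.1:3306/archivesspace?user=as&password=as123&useUnicode=true&characterEncoding=UTF-8"
```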
+ +#### Run the frontend dev server + +Start the `frontend:devserver` as described [here](https://docs.archivesspace.org/development/dev/#run-the-development-servers). Verify it is running by opening [http://localhost:3000/](http://localhost:3000/) in your browser. + +#### Run the public dev server + +Start the `public:devserver` as described [here](https://docs.archivesspace.org/development/dev/#run-the-development-servers). Verify it is running by opening [http://localhost:3001/](http://localhost:3001/) in your browser. + +#### Set the `STAFF_URL` environment variable + +Set your `STAFF_URL` environment variable to point the e2e tests at the local development server: + +```sh +export STAFF_URL='http://localhost:3000' +``` + +#### Set the `PUBLIC_URL` environment variable + +Set your `PUBLIC_URL` environment variable to point the e2e tests at the local public interface: + +```sh +export PUBLIC_URL='http://localhost:3001' +``` + +## Running tests + +After setting the appropriate `STAFF_URL` and `PUBLIC_URL` environment variables as described above, run the desired test(s) according to the following commands. + +### All test files at once + +```sh +bundle exec cucumber staff_features/ +``` + +### All scenarios in a specific file + +```sh +bundle exec cucumber staff_features/assessments/assessment_create.feature +``` + +### A specific scenario in a specific file + +```sh +bundle exec cucumber staff_features/assessments/assessment_create.feature --name 'Assessment is created' +``` + +## Debugging + +Add a `byebug` statement in any `.rb` file to set a breakpoint and start a debugging session in the console while running. See more [here](https://github.com/deivid-rodriguez/byebug). Don't forget to remove any `byebug` statements before a `git push`... 
+ +If you need to see the browser while running the test scenario and debugging, add a `HEADLESS=''` argument, as in: + +```sh +bundle exec cucumber HEADLESS='' staff_features/ +``` + +## Linters + +This test suite uses two linters, [`cuke_linter`](https://github.com/enkessler/cuke_linter) and [`rubocop`](https://rubocop.org/), to maintain code quality. + +```sh +# Lints Cucumber .feature files +bundle exec cuke_linter + +# Lints Ruby .rb files +bundle exec rubocop +``` + +## Editor integration (optional) + +ArchivesSpace provides optional VS Code workspace tasks that can run the end-to-end test suite without manually setting environment variables or changing directories. + +These tasks execute the same cucumber commands described above and are simply a convenience wrapper around the documented command-line workflow. + +Setup instructions are documented in the **VS Code guide** [here](https://docs.archivesspace.org/development/vscode/). + +Contributors not using VS Code can ignore this section and run the tests directly from the command line. diff --git a/src/content/docs/ja/development/ead-exporter.md b/src/content/docs/ja/development/ead-exporter.md new file mode 100644 index 0000000..55cc9cb --- /dev/null +++ b/src/content/docs/ja/development/ead-exporter.md @@ -0,0 +1,31 @@ +--- +title: Repository EAD Exporter +description: A guide to export all published resources' EAD within a specified repository into a single zip archive. +--- + +Exports all published resource record EAD XML files associated with a single +repository into a zip archive. This zip file will be saved in the ArchivesSpace +data directory (as defined in `config.rb`) and include the repository id in the +filename. 
## Usage

```sh
./scripts/ead_export.sh user password repository_id
```

A best practice is to put the password in a hidden file, such as:

```sh
touch ~/.aspace_password
chmod 0600 ~/.aspace_password
vi ~/.aspace_password # enter your password
```

Then call the script like:

```sh
./scripts/ead_export.sh user $(cat /home/user/.aspace_password) repository_id
```

This way you avoid directly exposing the password on the command line or in crontab etc.
diff --git a/src/content/docs/ja/development/index.md b/src/content/docs/ja/development/index.md
new file mode 100644
index 0000000..e0fdd9d
--- /dev/null
+++ b/src/content/docs/ja/development/index.md
---
title: Development
description: The index to the development section of the ArchivesSpace technical documentation.
---

- [Running a development version of ArchivesSpace](./dev.html)
- [Building an ArchivesSpace release](./release.html)
- [Docker](./docker.html)
- [DB versions listed by release](./release_schema_versions.html)
- [User Interface Test Suite](./ui_test.html)
- [Upgrading Rack for ArchivesSpace](./jruby-rack-build.html)
- [ArchivesSpace Releases](./releases.html)
- [Using the VS Code editor for local development](./vscode.html)
diff --git a/src/content/docs/ja/development/jruby-rack-build.md b/src/content/docs/ja/development/jruby-rack-build.md
new file mode 100644
index 0000000..9db3b5e
--- /dev/null
+++ b/src/content/docs/ja/development/jruby-rack-build.md
---
title: Upgrading Rack
description: A guide to upgrading Rack.
---

- Install a local JRuby (matching the aspace version, currently 9.2.12.0) and switch to it.
- Install Maven.
- Download jruby-rack.
+ +```shell +git checkout 1.1-stable +# install bundler version to match 1.1-stable Gemfile.lock +gem install bundler --version=1.14.6 +``` + +Should result in: + +``` +Fetching bundler-1.14.6.gem +Successfully installed bundler-1.14.6 +Parsing documentation for bundler-1.14.6 +Installing ri documentation for bundler-1.14.6 +Done installing documentation for bundler after 5 seconds +1 gem installed +``` + +Set environment to target rack version (the version being upgraded to): + +```shell +export RACK_VERSION=2.2.3 +bundle +``` + +Should result in: + +``` +Fetching gem metadata from https://rubygems.org/............. +Fetching version metadata from https://rubygems.org/.. +Resolving dependencies... +Installing rake 10.4.2 +Using bundler 1.14.6 +Using diff-lcs 1.2.5 +Installing jruby-openssl 0.9.21 (java) +Using rack 2.2.3 (was 1.6.8) +Using rspec-core 2.14.8 +Using rspec-mocks 2.14.6 +Using appraisal 0.5.2 +Using rspec-expectations 2.14.5 +Using rspec 2.14.1 +Bundle complete! 5 Gemfile dependencies, 10 gems now installed. +Use `bundle show [gemname]` to see where a bundled gem is installed. +``` + +This will have bumped the Rack version in Gemfile.lock: + +```diff +diff --git a/Gemfile.lock b/Gemfile.lock +index 493c667..f016925 100644 +--- a/Gemfile.lock ++++ b/Gemfile.lock +@@ -6,7 +6,7 @@ GEM + rake + diff-lcs (1.2.5) + jruby-openssl (0.9.21-java) +- rack (1.6.8) ++ rack (2.2.3) + rake (10.4.2) + rspec (2.14.1) + rspec-core (~> 2.14.0) +@@ -23,7 +23,7 @@ PLATFORMS + DEPENDENCIES + appraisal + jruby-openssl (~> 0.9.20) +- rack (~> 1.6.8) ++ rack (= 2.2.3) + rake (~> 10.4.2) + rspec (~> 2.14.1) +``` + +Build the jruby-rack jar: + +```bash +bundle exec jruby -S rake clean gem SKIP_SPECS=true +``` + +This creates `target/jruby-rack-1.1.21.jar` with Rack 2.2.3. 
+ +Upload the jar to the public s3 bucket, specifying the rack version: + +```bash +aws s3 cp target/jruby-rack-1.1.21.jar \ + s3://as-public-shared-files/jruby-rack-1.1.21_rack-2.2.3.jar \ + --profile archivesspace +``` + +Finally, update `rack_version` in the aspace `build.xml` file. diff --git a/src/content/docs/ja/development/release.md b/src/content/docs/ja/development/release.md new file mode 100644 index 0000000..b157437 --- /dev/null +++ b/src/content/docs/ja/development/release.md @@ -0,0 +1,263 @@ +--- +title: Building a release +description: How to build an ArchivesSpace release. +--- + +- [Pre-release steps](#pre-release-steps) +- [Build the docs](#build-and-publish-the-api-and-yard-docs) +- [Build the release](#building-a-release-yourself) +- [Post the release with release notes](#create-the-release-with-notes) +- [Post-release updates](#post-release-updates) + +## Clone the git repository + +When building a release it is important to start from a clean repository. The +safest way of ensuring this is to clone the repo: + +```shell +git clone https://github.com/archivesspace/archivesspace.git +``` + +## Checkout the release branch and create release tag + +If you are building a major or minor version (see [https://semver.org](https://semver.org)), +start by creating a branch for the release and all future patch releases: + +```shell +git checkout -b release-v1.0.x +git tag v1.0.0 +``` + +If you are building a patch version, just check out the existing branch and see below: + +```shell +git checkout release-v1.0.x +``` + +Patch versions typically arise because a regression or critical bug has arisen since +the last major or minor release. We try to ensure that the "hotfix" is merged into both +master and the release branch without the need to cherry-pick commits from one branch to +the other. The reason is that cherry-picking creates a new commit (with a new commit id) +that contains identical changes, which is not optimal for the repository history. 
+ +It is therefore preferable to start from the release branch when creating a "hotfix" +that needs to be merged into both the release branch and master. The Pull Request should +then be based on the release branch. After that Pull Request has been through code review and +QA and been merged, a second Pull Request should be created to merge the updated release branch +to master. + +Consider the following scenario. The current production release is v1.0.0 and a critical +bug has been discovered. In the time since v1.0.0 was released, new features have been +added to the master branch, intended for release in v1.1.0: + +```shell +git checkout -b oh-no-some-migration-corrupts-some-data origin/release-v1.0.x +( fixes problem ) +git commit -m "fix bad migration and add a migration to repair corrupted data" +gh pr create -B release-v1.0.x --web +( PR is reviewed and merged to the release branch) +git checkout release-v1.0.x +git pull +git tag v1.0.1 +gh pr create -B master --web +( PR is reviewed and merged to the master branch) +``` + +## Pre-release steps + +### Run the ArchivesSpace rake tasks to check for issues + +Before proceeding further, it's a good idea to check that there are no missing +translations or multiple gem versions. + +1. Bootstrap your current development environment on the latest master branch + by downloading all dependencies--JRuby, Gems, Solr, etc. + + ```shell + build/run bootstrap + ``` + +2. Run the following checks (recommended): + + ```shell + build/run rake -Dtask=check:multiple_gem_versions + ``` + +3. If multiple gem versions are reported, that should be addressed prior to moving on. + +## Build and publish the API and YARD docs + +API docs are built using the submodule in `docs/slate` and Docker. +YARD docs are built using the YARD gem. At this time, they cover a small +percentage of the code and are not especially useful. + +### Build the API docs + +1.
API documentation depends on the [archivesspace/slate](https://github.com/archivesspace/slate) submodule + and on Docker. Slate cannot run on JRuby. + + ```shell + git submodule init + git submodule update + ``` + +2. Run the `doc:api` task to generate Slate API and YARD documentation. (Note: the + API generation requires a DB connection with standard enumeration values.) + + ```shell + ARCHIVESSPACE_VERSION=X.Y.Z APPCONFIG_DB_URL=$APPCONFIG_DB_URL build/run doc:api + ``` + + This generates `docs/slate/source/index.html.md` (the Slate source document). + +3. (Optional) Run a Docker container to preview the API docs. + + ```shell + docker-compose -f docker-compose-docs.yml up + ``` + + Visit `http://localhost:4568` to preview the API docs. + +4. Build the static API files. The API markdown document should already be in `docs/slate/source` (step 2 above). + The API markdown will be rendered to HTML and moved to `docs/build/api`. + + ```shell + docker run --rm --name slate -v $(pwd)/docs/build/api:/srv/slate/build -v $(pwd)/docs/slate/source:/srv/slate/source slatedocs/slate build + ``` + +### Build the YARD docs + +1. Build the YARD docs in the `docs/build/doc` directory: + + ```shell + ./build/run doc:yardoc + ``` + +### Commit built docs and push to GitHub Pages + +1. Double-check that you are on a release branch (we don't need this stuff in master).
Commit the newly built documentation and push it to the `gh-pages` branch only: + + ```shell + git add docs/build + git commit -m "release-vx.y.z api and yard documentation" + ``` + + Use `git subtree` to push the documentation to the `gh-pages` branch: + + ```shell + git subtree push --prefix docs/build origin gh-pages + ``` + + Published documents should appear a short while later at: + [http://archivesspace.github.io/archivesspace/api](http://archivesspace.github.io/archivesspace/api) + [http://archivesspace.github.io/archivesspace/doc](http://archivesspace.github.io/archivesspace/doc) + + Note: if the push command fails, you may need to delete `gh-pages` in the remote repo: + + ```shell + git push origin :gh-pages + ``` + + **Note:** do not push the docs/build directory to the release branch, as it is only meant to be maintained in the `gh-pages` branch. + +## Building a release yourself + +1. Building the actual release is very simple. Run the following: + + ```shell + ./scripts/build_release vX.X.X + ``` + + Replace X.X.X with the version number. This will build and package a release + in a zip file. + +## Building a release on GitHub + +1. There is no need to build the release yourself. Just push your tag to GitHub + and trigger the `release` workflow: + + ```shell + git push origin vX.X.X + ``` + + Replace X.X.X with the version number. The release will be created as a **draft**; it will not be published automatically. + +## Create the Release with Notes + +### Build the release notes + +**As of v3.4.0, it should no longer be necessary to build release notes manually.** + +To manually generate release notes: + +Create a deployment token on your [GitHub developer settings](https://github.com/settings/tokens).
+ +```shell +export GITHUB_TOKEN={YOUR DEPLOYMENT TOKEN ON GITHUB} +./build/run doc:release_notes -Dcurrent_tag=v3.4.0 -Doutfile=RELEASE_NOTES.md -Dtoken=$GITHUB_TOKEN +``` + +#### Edit Release Page as Necessary + +If there are any special considerations, add them to the release page manually. Special considerations +might include changes that will require 3rd-party plugins to be updated or +that a full reindex is required. + +Example content: + +```md +This release requires a **full reindex** of ArchivesSpace for all functionality to work +correctly. Please follow the [instructions for reindexing](/administration/indexes) +before starting ArchivesSpace with the new version. +``` + +## Post release updates + +After a release has been published, it's time for some maintenance before the next +cycle of development clicks into full gear. Consider the following, depending on +current team consensus: + +### Branches + +Delete merged and stale branches on GitHub as appropriate. + +### Milestones + +Close the just-released Milestone, adding a due date of today's date. Create a +new Milestone for the anticipated next release (this can be changed later if the +version numbering is changed for some reason). + +### Test Servers + +Review existing test servers, and request the removal of any that are no longer +needed (e.g. feature branches that have been merged for the release). + +### GitHub Issues + +Review existing open GitHub issues and close any that have been resolved by +the new release (linking to a specific PR if possible). For the remaining open +issues, review whether they are still a problem, apply labels, link to known JIRA +issues, and add comments as necessary/relevant. + +### Accessibility Scan + +Run accessibility scans for both the public and staff sites and file a ticket +for any new and ongoing accessibility errors.
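One way to script such a scan is with the axe-core CLI; the tool choice and the localhost URLs/ports below are assumptions, not a documented project convention. The sketch only echoes its targets and leaves the real invocation commented out:

```shell
# sketch: accessibility scan targets for the staff and public sites
# (URLs/ports and the axe-core CLI are assumptions)
STAFF_URL=${STAFF_URL:-http://localhost:8080}
PUBLIC_URL=${PUBLIC_URL:-http://localhost:8081}

for url in "$STAFF_URL" "$PUBLIC_URL"; do
  echo "would scan: $url"
  # real invocation (needs Node.js and the site running):
  # npx @axe-core/cli "$url" --save "axe-results.json"
done
```

Saving the JSON output per release makes it easier to tell new accessibility errors from ongoing ones.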
+ +### PR Assignments + +Begin assigning queued PRs to members of the Core Committers group, making +sure to include the appropriate milestone for the anticipated next release. + +### Dependencies + +#### Gems + +Take a look at all the Gemfile.lock files (in backend, frontend, public, +etc.) and review the gem versions. Pay close attention to the Rails & Friends +(ActiveSupport, ActionPack, etc.), Rack, and Sinatra versions and check whether +any security patch releases have been issued. There usually are, especially +since Rails ships fix updates rather frequently. + +To update the gems, update the version in the Gemfile, delete the Gemfile.lock, and +run `./build/run bootstrap` to download everything. Then make sure your test +suite passes. + +Once everything passes, commit your Gemfiles and Gemfile.lock files. diff --git a/src/content/docs/ja/development/release_schema_versions.md b/src/content/docs/ja/development/release_schema_versions.md new file mode 100644 index 0000000..42a75d1 --- /dev/null +++ b/src/content/docs/ja/development/release_schema_versions.md @@ -0,0 +1,41 @@ +--- +title: Database versions by release +description: A list of ArchivesSpace releases and their corresponding database versions.
+--- + +| Release | DB Version | +| ------- | ---------- | +| 1.1.0 | 33 | +| 1.1.1 | 35 | +| 1.1.2 | 35 | +| 1.2.0 | 38 | +| 1.3.0 | 56 | +| 1.4.0 | 59 | +| 1.4.1 | 59 | +| 1.4.2 | 59 | +| 1.5.0 | 74 | +| 1.5.1 | 74 | +| 1.5.2 | 75 | +| 1.5.3 | 75 | +| 1.5.4 | 75 | +| 2.0.0 | 84 | +| 2.0.1 | 84 | +| 2.1.0 | 92 | +| 2.1.1 | 92 | +| 2.1.2 | 92 | +| 2.2.0 | 93 | +| 2.2.1 | 94 | +| 2.2.2 | 95 | +| 2.3.0 | 97 | +| 2.3.1 | 97 | +| 2.3.2 | 97 | +| 2.4.0 | 100 | +| 2.4.1 | 100 | +| 2.5.0 | 102 | +| 2.5.1 | 102 | +| 2.5.2 | 108 | +| 2.6.0 | 120 | +| 2.7.0 | 126 | +| 2.7.1 | 129 | +| 2.8.0 | 135 | +| 2.8.1 | 138 | diff --git a/src/content/docs/ja/development/releases.md b/src/content/docs/ja/development/releases.md new file mode 100644 index 0000000..2b31a65 --- /dev/null +++ b/src/content/docs/ja/development/releases.md @@ -0,0 +1,192 @@ +--- +title: Releases +description: A list of ArchivesSpace releases, their release dates, schema numbers, and links to the release on GitHub. +--- + +3.4.0 May 24, 2023. +The schema number for this release is 172. +https://github.com/archivesspace/archivesspace/tree/v3.4.0 + +3.3.1 Oct 4, 2022. +The schema number for this release is 164. +https://github.com/archivesspace/archivesspace/tree/v3.3.1 + +3.2.0 February 4, 2022. +The schema number for this release is 159. +https://github.com/archivesspace/archivesspace/releases/download/v3.2.0/archivesspace-v3.2.0.zip + +3.1.1 November 19, 2021. +The schema number for this release is 157. +https://github.com/archivesspace/archivesspace/releases/download/v3.1.1/archivesspace-v3.1.1.zip + +3.1.0 September 20, 2021. +The schema number for this release is 157. +https://github.com/archivesspace/archivesspace/releases/download/v3.1.0/archivesspace-v3.1.0.zip + +3.0.2 August 11, 2021. +The schema number for this release is 148. +https://github.com/archivesspace/archivesspace/releases/download/v3.0.2/archivesspace-v3.0.2.zip + +3.0.1 June 4, 2021. +The schema number for this release is 147.
+https://github.com/archivesspace/archivesspace/releases/download/v3.0.1/archivesspace-v3.0.1.zip + +3.0.0 May 10, 2021 +The schema number for this release is 147. +[Bug in Release] + +2.8.1 Nov 11, 2020. +The schema number for this release is 138. +https://github.com/archivesspace/archivesspace/releases/download/v2.8.1/archivesspace-v2.8.1.zip + +2.8.0 Jul 16, 2020. +The schema number for this release is 135. +https://github.com/archivesspace/archivesspace/releases/download/v2.8.0/archivesspace-v2.8.0.zip + +2.7.1 Feb 14, 2020. +The schema number for this release is 129. +https://github.com/archivesspace/archivesspace/releases/download/v2.7.1/archivesspace-v2.7.1.zip + +2.7.0 Oct 9, 2019. +The schema number for this release is 126. +https://github.com/archivesspace/archivesspace/releases/download/v2.7.0/archivesspace-v2.7.0.zip + +2.6.0 May 30, 2019. +The schema number for this release is 120. +https://github.com/archivesspace/archivesspace/releases/download/v2.6.0/archivesspace-v2.6.0.zip + +2.5.2 Jan 15, 2019. +The schema number for this release is 108. +https://github.com/archivesspace/archivesspace/releases/download/v2.5.2/archivesspace-v2.5.2.zip + +2.5.1 Oct 17, 2018. +This release includes no new database migrations. +https://github.com/archivesspace/archivesspace/releases/download/v2.5.1/archivesspace-v2.5.1.zip + +2.5.0 Aug 10, 2018. +The schema number for this release is 102. +https://github.com/archivesspace/archivesspace/releases/download/v2.5.0/archivesspace-v2.5.0.zip + +2.4.1 Jun 22, 2018. +This release includes no new database migrations. +https://github.com/archivesspace/archivesspace/releases/download/v2.4.1/archivesspace-v2.4.1.zip + +2.4.0 Jun 7, 2018. +The schema number for this release is 100. +https://github.com/archivesspace/archivesspace/releases/download/v2.4.0/archivesspace-v2.4.0.zip + +2.3.2 Mar 27, 2018. +This release includes no new database migrations. 
+https://github.com/archivesspace/archivesspace/releases/download/v2.3.2/archivesspace-v2.3.2.zip + +2.3.1 Feb 28, 2018. +This release includes no new database migrations. +https://github.com/archivesspace/archivesspace/releases/download/v2.3.1/archivesspace-v2.3.1.zip + +2.3.0 Feb 5, 2018. +The schema number for this release is 97. +https://github.com/archivesspace/archivesspace/releases/download/v2.3.0/archivesspace-v2.3.0.zip + +2.2.2 Dec 13, 2017. +The schema number for this release is 95. +https://github.com/archivesspace/archivesspace/releases/download/v2.2.2/archivesspace-v2.2.2.zip + +2.2.0 Oct 12, 2017. +The schema number for this release is 93. +https://github.com/archivesspace/archivesspace/releases/download/v2.2.0/archivesspace-v2.2.0.zip + +2.1.2 Sep 1, 2017. +The schema number for this release is 92. +https://github.com/archivesspace/archivesspace/releases/download/v2.1.2/archivesspace-v2.1.2.zip + +2.1.1 Aug 16, 2017. +The schema number for this release is 92. +https://github.com/archivesspace/archivesspace/releases/download/v2.1.1/archivesspace-v2.1.1.zip + +2.1.0 Jul 18, 2017. +The schema number for this release is 92. +https://github.com/archivesspace/archivesspace/releases/download/v2.1.0/archivesspace-v2.1.0.zip + +2.0.1 May 2, 2017. +The schema number for this release is 84. +https://github.com/archivesspace/archivesspace/releases/download/v2.0.1/archivesspace-v2.0.1.zip + +2.0.0 Apr 18, 2017. +The schema number for this release is 84. +https://github.com/archivesspace/archivesspace/releases/download/v2.0.0/archivesspace-v2.0.0.zip + +1.5.4 Mar 16, 2017. +The schema number for this release is 75. +https://github.com/archivesspace/archivesspace/releases/download/v1.5.4/archivesspace-v1.5.4.zip + +1.5.3 Feb 15, 2017. +The schema number for this release is 75. +https://github.com/archivesspace/archivesspace/releases/download/v1.5.3/archivesspace-v1.5.3.zip + +1.5.2 Dec 8, 2016. +The schema number for this release is 75. 
+https://github.com/archivesspace/archivesspace/releases/download/v1.5.2/archivesspace-v1.5.2.zip + +1.5.1 Jul 29, 2016. +The schema number for this release is 74. +https://github.com/archivesspace/archivesspace/releases/download/v1.5.1/archivesspace-v1.5.1.zip + +1.5.0 Jul 20, 2016. +The schema number for this release is 74. +https://github.com/archivesspace/archivesspace/releases/download/v1.5.0/archivesspace-v1.5.0.zip + +1.4.2 Oct 27, 2015. +The schema number for this release is 59. +https://github.com/archivesspace/archivesspace/releases/download/v1.4.2/archivesspace-v1.4.2.zip + +1.4.1 Oct 13, 2015. +The schema number for this release is 59. +https://github.com/archivesspace/archivesspace/releases/download/v1.4.1/archivesspace-v1.4.1.zip + +1.4.0 Sep 29, 2015. +The schema number for this release is 59. +https://github.com/archivesspace/archivesspace/releases/download/v1.4.0/archivesspace-v1.4.0.zip + +1.3.0 Jun 30, 2015. +The schema number for this release is 56. +https://github.com/archivesspace/archivesspace/releases/download/v1.3.0/archivesspace-v1.3.0.zip + +1.2.0 Mar 30, 2015. +The schema number for this release is 38. +https://github.com/archivesspace/archivesspace/releases/download/v1.2.0/archivesspace-v1.2.0.zip + +1.1.2 Jan 21, 2015. +The schema number for this release is 35. +https://github.com/archivesspace/archivesspace/releases/download/v1.1.2/archivesspace-v1.1.2.zip + +1.1.1 Jan 6, 2015. +The schema number for this release is 35. +https://github.com/archivesspace/archivesspace/archive/refs/tags/v1.1.1.zip (only source available) + +1.1.0 Oct 20, 2014. +The schema number for this release is 33. +https://github.com/archivesspace/archivesspace/releases/download/v1.1.0/archivesspace-v1.1.0.zip + +1.0.9 May 13, 2014. +The schema number for this release is ??? +https://github.com/archivesspace/archivesspace/releases/download/v1.0.9/archivesspace-v1.0.9.zip + +1.0.7.1 March 7, 2014. +The schema number for this release is ??? 
+https://github.com/archivesspace/archivesspace/releases/download/v1.0.7.1/archivesspace-v1.0.7.1.zip + +1.0.4 Jan 14, 2014. +The schema number for this release is ??? +https://github.com/archivesspace/archivesspace/releases/download/v1.0.4/archivesspace-v1.0.4.zip + +1.0.2 Nov 26, 2013. +The schema number for this release is ??? +https://github.com/archivesspace/archivesspace/releases/download/v1.0.2/archivesspace-v1.0.2.zip + +1.0.1 Nov 1, 2013. +The schema number for this release is ??? +https://github.com/archivesspace/archivesspace/releases/download/v1.0.1/archivesspace-v1.0.1.zip + +1.0.0 Oct 4, 2013. +The schema number for this release is ??? +https://github.com/archivesspace/archivesspace/releases/download/v1.0.0/archivesspace-v1.0.0.zip diff --git a/src/content/docs/ja/development/ui_test.md b/src/content/docs/ja/development/ui_test.md new file mode 100644 index 0000000..c64d6a6 --- /dev/null +++ b/src/content/docs/ja/development/ui_test.md @@ -0,0 +1,140 @@ +--- +title: UI tests +description: Instructions on running automated browser tests with Selenium on the ArchivesSpace UI on both Firefox and Chrome. +--- + +ArchivesSpace's staff and public interfaces use [Selenium](http://docs.seleniumhq.org/) to run automated browser tests. These tests can be run using [Firefox via geckodriver](https://firefox-source-docs.mozilla.org/testing/geckodriver/geckodriver/index.html) and [Chrome](https://sites.google.com/a/chromium.org/chromedriver/home) (either regular Chrome or headless). + +## UI tests with firefox (default) + +Firefox is the default used in our [CI workflows](https://github.com/archivesspace/archivesspace/actions). + +On Ubuntu Linux 22.04 or later, the included Firefox deb package is a transition package that actually installs Firefox through [snap](https://snapcraft.io/). Snap has security restrictions that do not work with automated testing without additional configuration. 
+ +To uninstall the Firefox snap package and reinstall it as a traditional deb package on Ubuntu Linux, use: + +```bash +# remove the old snap firefox package (if installed) +sudo snap remove firefox + +# create a keyring directory (if not existing) +sudo install -d -m 0755 /etc/apt/keyrings + +# download the mozilla key and add it to the keyring +wget -q https://packages.mozilla.org/apt/repo-signing-key.gpg -O- | sudo tee /etc/apt/keyrings/packages.mozilla.org.asc > /dev/null + +# set a high priority for the mozilla packages +echo "deb [signed-by=/etc/apt/keyrings/packages.mozilla.org.asc] https://packages.mozilla.org/apt mozilla main" | sudo tee -a /etc/apt/sources.list.d/mozilla.list > /dev/null +echo ' +Package: * +Pin: origin packages.mozilla.org +Pin-Priority: 1000 +' | sudo tee /etc/apt/preferences.d/mozilla + +# install firefox +sudo apt update && sudo apt install firefox +``` + +When using Firefox, you need to make sure that the version of geckodriver you are using works with your Firefox version; see this [compatibility table](https://firefox-source-docs.mozilla.org/testing/geckodriver/Support.html). Get your installed Firefox version by running: `firefox --version`. + +On Linux, you can download the geckodriver version that corresponds to your Firefox version [here](https://github.com/mozilla/geckodriver/releases). + +On Mac you can use: `brew install geckodriver`. + +## UI tests with Chrome + +To run using Chrome, you must first download the appropriate [ChromeDriver +executable](https://sites.google.com/a/chromium.org/chromedriver/downloads) +and place it somewhere in your OS system path. Mac users with Homebrew may accomplish this via `brew install --cask chromedriver`. + +**Please note, you must have either Firefox or Chrome installed on your system to +run these tests.
Consult the [Firefox WebDriver](https://developer.mozilla.org/en-US/docs/Web/WebDriver) +or [ChromeDriver](https://sites.google.com/a/chromium.org/chromedriver/home) +documentation to ensure your Selenium, driver, browser, and OS versions all match +and support each other.** + +## Before running: + +Run the bootstrap build task to configure JRuby and all required dependencies: + +```bash +$ cd .. +$ build/run bootstrap +``` + +Note: all example code assumes you are running from your ArchivesSpace project directory. + +## Running the tests: + +```bash +# Frontend tests +./build/run frontend:selenium # Firefox, headless +FIREFOX_OPTS= ./build/run frontend:selenium # Firefox, no-opts = heady + +SELENIUM_CHROME=true ./build/run frontend:selenium # Chrome, headless +SELENIUM_CHROME=true CHROME_OPTS= ./build/run frontend:selenium # Chrome, no-opts = heady + +# Public tests +./build/run public:test # Firefox, headless +FIREFOX_OPTS= ./build/run public:test # Firefox, no-opts = heady + +SELENIUM_CHROME=true ./build/run public:test # Chrome, headless +SELENIUM_CHROME=true CHROME_OPTS= ./build/run public:test # Chrome, no-opts = heady +``` + +Tests can be scoped to specific files or groups: + +```bash +./build/run .. -Dspec='path/to/spec/from/spec/directory' # single file +./build/run .. -Dexample='[description from it block]' # specific block + +# EXAMPLES +./build/run frontend:selenium -Dexample='Repository model' +FIREFOX_OPTS= ./build/run frontend:selenium -Dexample='Repository model' # Firefox, heady + +./build/run public:test -Dspec='features/accessibility_spec.rb' +SELENIUM_CHROME=true CHROME_OPTS= ./build/run public:test -Dspec='features/accessibility_spec.rb' # Chrome, heady +``` + +Tests require a backend and a frontend service to be running.
To avoid the overhead of starting and stopping them while developing, you can run tests against a dev backend: + +```bash +# start mysql and solr containers: +docker-compose -f docker-compose-dev.yml up + +# start services: +supervisord -c supervisord/archivesspace.conf + +# run a spec using the started backend: +ASPACE_TEST_BACKEND_URL='http://localhost:4567' ./build/run frontend:test -Dpattern="./features/events_spec.rb" + +# run all examples that contain "can spawn" in their description: +./build/run frontend:test -Dpattern="./features/accessions_spec.rb" -Dexample="can spawn" +``` + +Note, however, that some tests are dependent on a sequence of ordered steps and may not always run cleanly in isolation. In this case, more than the example provided may be run, and/or unexpected failures may result. + +### Saved pages on spec failures + +When frontend specs fail, a screenshot and an HTML page are saved for each failed example under `frontend/tmp/capybara`. On the CI, a zip file will be available for each failed CI job run under Summary -> Artifacts. In order to load the assets (and not see plain HTML) when viewing the saved HTML pages, a dev server should be running locally on port 3000; see [Running a development version of ArchivesSpace](/development/dev). + +### Keeping the test database up to date + +When calling `./build/run frontend:test` to run frontend specs, the following steps happen before the actual specs run: + +- All tables of the test database are dropped: `./build/run db:nuke:test` +- `frontend/spec/fixtures/archivesspace-test.sql` is loaded to the test database: `./build/run db:load:test` +- Any not-yet-applied migrations are run: `./build/run db:migrate:test` + +#### Updating the test database dump + +If any migrations are applied whenever you run one or all frontend specs, it means that the test database dump `frontend/spec/fixtures/archivesspace-test.sql` is out of date.
A new test database dump can be created by running: + +```bash +./build/run db:nuke:test +./build/run db:load:test +./build/run db:migrate:test +./build/run db:dump:test +``` + +An updated `frontend/spec/fixtures/archivesspace-test.sql` will be created that can be committed and pushed to a Pull Request. diff --git a/src/content/docs/ja/development/vscode.md b/src/content/docs/ja/development/vscode.md new file mode 100644 index 0000000..729f336 --- /dev/null +++ b/src/content/docs/ja/development/vscode.md @@ -0,0 +1,70 @@ +--- +title: Using the VS Code editor +description: Instructions for using the VS Code editor with ArchivesSpace, including prerequisites and setup. +--- + +ArchivesSpace provides a [VS Code settings file](https://github.com/archivesspace/archivesspace/blob/master/.vscode/settings.json) that makes it easy for contributors using VS Code to follow the code style of the project and work with the end-to-end tests. Using this toolchain in your editor helps fix code format and lint errors _before_ committing files or running tests. In many cases such errors will be fixed automatically when the file being worked on is saved. Errors that can't be fixed automatically will be highlighted with squiggly lines. Hovering your cursor over these lines will display a description of the error to help reach a solution. + +## Prerequisites + +1. [Node.js](https://nodejs.org) +2. [Ruby](https://www.ruby-lang.org/) +3. [VS Code](https://code.visualstudio.com/) + +## Set up VS Code + +### Add system dependencies + +1. [ESLint](https://eslint.org/) +2. [Prettier](https://prettier.io/) +3. [Rubocop](https://rubocop.org/) +4. [Stylelint](https://stylelint.io/) + +#### Rubocop + +```bash +gem install rubocop +``` + +See https://docs.rubocop.org/rubocop/installation.html for further information, including using Bundler. + +#### ESLint, Prettier, Stylelint + +Run the following command from the ArchivesSpace root directory.
+ +```bash +npm install +``` + +See [package.json](https://github.com/archivesspace/archivesspace/blob/master/package.json) for further details on how these tools are used in ArchivesSpace. + +### Add VS Code extensions + +Add the following extensions via the VS Code command palette or the Extensions panel. (See this [documentation for installing and managing extensions](https://code.visualstudio.com/learn/get-started/extensions)). + +1. [ESLint](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) (dbaeumer.vscode-eslint) +2. [Prettier](https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode) (esbenp.prettier-vscode) +3. [Ruby Rubocop Revived](https://marketplace.visualstudio.com/items?itemName=LoranKloeze.ruby-rubocop-revived) (LoranKloeze.ruby-rubocop-revived) +4. [Stylelint](https://marketplace.visualstudio.com/items?itemName=stylelint.vscode-stylelint) (stylelint.vscode-stylelint) + +Optional — for enhancing work with the end-to-end tests: + +5. [Cucumber](https://marketplace.visualstudio.com/items?itemName=CucumberOpen.cucumber-official) (CucumberOpen.cucumber-official) — see [End-to-end test integration](#end-to-end-test-integration), especially step-definition navigation. + +It's important to note that since these extensions work in tandem with the [VS Code settings file](https://github.com/archivesspace/archivesspace/blob/master/.vscode/settings.json), these settings only impact your ArchivesSpace VS Code workspace, not your global VS Code User settings. + +The extensions should now work out of the box, providing error messages and autocorrecting fixable errors on file save. + +### End-to-end test integration + +The ArchivesSpace repository includes optional VS Code workspace configuration that integrates the Cucumber end-to-end test suite with the editor.
The files [`.vscode/example.tasks.json`](https://github.com/archivesspace/archivesspace/blob/master/.vscode/example.tasks.json) and [`.vscode/example.settings.json`](https://github.com/archivesspace/archivesspace/blob/master/.vscode/example.settings.json) are not enabled by default, so they do not override your personal editor configuration. + +**Enable the tasks** + +Copy the example tasks file to `.vscode/tasks.json`. This adds a task that runs the e2e test suite with the correct working directory, Ruby environment, and environment variables. Run it via **Terminal → Run Task… → Cucumber: Run e2e-test** (the same command as in the [e2e test documentation](/development/e2e_tests)). You may optionally supply a feature file path or a `file.feature:line` reference. + +**Step-definition navigation** + +Integrate the contents of `example.settings.json` into your existing `.vscode/settings.json` (do not replace the existing file; merge in the Cucumber-related settings you want to use so your current workspace settings are preserved). + +This configures the Cucumber extension for `e2e-tests/**/*.feature` and shared Ruby step definitions, enabling jump-to-definition, undefined-step detection, and discovery of shared steps. This simplifies contributing new end-to-end tests. diff --git a/src/content/docs/ja/index.mdx b/src/content/docs/ja/index.mdx new file mode 100644 index 0000000..3d6ec85 --- /dev/null +++ b/src/content/docs/ja/index.mdx @@ -0,0 +1,14 @@ +--- +title: ArchivesSpace Technical Documentation +description: Technical documentation for ArchivesSpace, the open source archives management tool.
+tableOfContents: false +editUrl: false +issueUrl: false +lastUpdated: false +prev: false +next: false +--- + +import Homepage from '@components/HomePage.astro' + +<Homepage /> diff --git a/src/content/docs/ja/migrations/migrate_from_archivists_toolkit.md b/src/content/docs/ja/migrations/migrate_from_archivists_toolkit.md new file mode 100644 index 0000000..c45195b --- /dev/null +++ b/src/content/docs/ja/migrations/migrate_from_archivists_toolkit.md @@ -0,0 +1,126 @@ +--- +title: Migrating from Archivists' Toolkit +description: Guidelines for migrating data from Archivists' Toolkit 2.0 Update 16 to all ArchivesSpace 2.1.x or 2.2.x releases using the migration tool provided by ArchivesSpace. +--- + +These guidelines are for migrating data from Archivists' Toolkit 2.0 Update 16 to all ArchivesSpace 2.1.x or 2.2.x releases using the migration tool provided by ArchivesSpace. Migrations of data from earlier versions of the Archivists' Toolkit (AT) or other versions of ArchivesSpace are not supported by these guidelines or the migration tool. + +> Note: A migration from Archivists' Toolkit to ArchivesSpace should not be run against an active production database. + +## Preparing for migration + +- Make a copy of the AT instance, including the database, to be migrated and use it as the source of the migration. It is strongly recommended that you not use your AT production instance and database as the source of the migration for the simple reason of protecting the production version from any anomalies that might occur during the migration process. +- Review your source database for the quality of the data. Look for invalid records, duplicate name and subject records, and duplicate controlled values. Irregular data will either be carried forward to the ArchivesSpace instance or, in some cases, block the migration process. +- Select a representative sample of accession, resource, and digital object records to be examined closely when the migration is completed.
Make sure to represent in the sample both the simplest and most complicated or extensive records in the overall data collection.
+
+### Notes
+
+- An AT subject record will be set to type 'topical' if it does not have a valid AT type statement or its type is not one of the types in ArchivesSpace. Several other AT LookupList values are not present in ArchivesSpace. These LookupList values cannot be added during the AT migration process and will therefore need to be changed in AT prior to migration. For full details on enum (controlled value list) mappings see the data map. You can use the AT Lookup List tool to change values that will not map correctly, as specified by the data map.
+- Record audit information (created by, date created, modified by, and date modified) will not migrate from AT to ArchivesSpace. ArchivesSpace will assign new audit data to each record as it is imported into ArchivesSpace. The exception to this is that the username of the user who creates an accession record will be migrated to the accession general note field.
+- Set up a production ArchivesSpace instance, including the MySQL database you will migrate into. Instructions are included at [Getting Started with ArchivesSpace](/administration/getting_started) and [Running ArchivesSpace against MySQL](/provisioning/mysql).
+
+## Preparing for Migrating AT Data
+
+- The migration process is iterative in nature. A migration report is generated at the end of each migration routine. The report indicates errors or issues occurring with the migration. (An example of an AT migration report is provided at the end of this document.) You should use this report to determine if any problems observed in the migration results are best remedied in the source data or in the migrated data in the ArchivesSpace instance. If you address the problems in the source data, then you can simply conduct the migration again.
+- However, once you accept the migration and address problems in the migrated data, you cannot migrate the source data again without establishing a new target ArchivesSpace instance. Migrating data to a previously migrated ArchivesSpace database may result in a great many duplicate record error messages and may cause unrecoverable damage to the ArchivesSpace database.
+- Please note, data migration can be a very memory- and time-intensive task due to the large number of records being transferred. As such, we recommend running the AT migration on a computer with at least 2GB of available memory.
+- Make sure your ArchivesSpace MySQL database is set up correctly, following the documentation in the ArchivesSpace README file. When creating a MySQL database, you MUST set the default character encoding for the database to be UTF8. This is particularly important if you use a MySQL client, such as Navicat, MySQL Workbench, phpMyAdmin, etc., to create the database. See [Running ArchivesSpace against MySQL](/provisioning/mysql) for more details.
+- Increase the maximum Java heap space if you are experiencing timeout events. To do so:
+  - Stop the current ArchivesSpace instance
+  - Open in a text editor the file "archivesspace.sh" (Linux / Mac OSX) or archivesspace.bat (Windows). The file is located in the ArchivesSpace installation directory.
+  - Find the text string "-Xmx512m" and change it to "-Xmx1024m".
+  - Save the file.
+  - Restart the ArchivesSpace instance.
+  - Restart the AT migration process.
+
+## Running the Migration Tool as an AT Plugin
+
+- Make sure that the AT instance you want to migrate from is shut down. Next, download the "scriptAT.zip" file from the at-migration release GitHub page (https://github.com/archivesspace/at-migration/releases) and copy the file into the plugins folder of the AT instance, overwriting the one that's already there if needed.
+- Make sure the ArchivesSpace instance that you are migrating into is up and running.
+- Restart the AT instance to load the newly installed plug-in. To run the plug-in, go to the "Tools" menu, then select "Script Runtime v1.0", and finally "ArchivesSpace Data Migrator". This will cause the plug-in window to display.
+
+![AT migrator](../../../../images/at_migrator.jpg)
+
+- Change the default information in the Migrator UI:
+  - **Threads** – Used to specify the number of clients that are used to copy Resource records simultaneously. The limit on the number of clients depends on the record size and allocated memory. A number from 4 to 6 is generally a good value to use, but can be reduced if an "Out of Memory Exception" occurs.
+  - **Host** – The URL and port number of the ArchivesSpace backend server.
+  - **"Copy records when done" checkbox** – Used to specify that the records should be copied once the repository check has completed.
+  - **Password** – Password for the ArchivesSpace "admin" account. The default value of "admin" should work unless it was changed by the ArchivesSpace administrator.
+  - **Reset Password** – Each user account transferred has its password reset to this. Please note that users need to change their password when they first log in unless LDAP is used for authentication.
+  - **"Specify Type of Extent Data" Radio button** – If you are using the BYU Plugin, select that option. Otherwise, leave as the default – Normal or Harvard Plugin.
+  - **Specify Unlinked Records to NOT Copy checkboxes** – If you have name or subject records that are not linked to accessions, resources, or digital objects, you can choose not to migrate those to ArchivesSpace.
+  - **"Records to Publish?" checkboxes** – Used to specify what types of records should be published after they are migrated to ArchivesSpace.
+  - **Text box showing -refid_unique, -term_default** – This is needed for the functioning of the migration tool. Please do not make changes to this area.
+  - **Output Console** – Display section for following the migration while it is running.
+  - **View Error Log** – Used to view a printout of all the errors encountered during the migration process. This can be used while the migration process is underway as well.
+- Once you have made the appropriate changes to the UI, there are three buttons to choose from to start the migration process.
+  - **Copy to ArchivesSpace** – This starts the migration to the ArchivesSpace instance indicated by the Host URL.
+  - **Run Repository Check** – The repository check searches for, and attempts to fix, repository misalignment between Resources and linked Accession/Digital Object records. The fix applied entails copying the linked accession/digital object record to the repository of the resource record in the ArchivesSpace database (those record positions are not modified in the AT database).
+
+    As long as accession records are not linked to multiple Resource records in different repositories, the fix will be valid. Otherwise, you will receive a warning message. For such cases, the Resource and Accession record(s) will still be migrated, but without links to one another; those links will need to be re-established in ArchivesSpace.
+
+    This misalignment problem involves only accession and resource records and not digital object records, as accession and resource records have a many-to-many relationship. Assessments also can have a many-to-many relationship with resources, accessions, and digital objects. However, since assessments are small and quick to copy, they will simply be copied to as many repositories as needed to establish all the appropriate links.
+
+    If the "Copy Records When Done" checkbox is selected, the records will be migrated to the ArchivesSpace instance once the check is completed.
+
+  - **Continue Previous Migration** – If the migration process fails, this is used to skip to the place where the failed previous migration left off. This should allow the migration process of resource records to be gracefully restarted without having to clean out the ArchivesSpace backend database and start from scratch.
+
+- For the most part, the data migration process should be automatic, with an error log being generated when completed. However, depending on the particular data, various errors may occur that would require the migration to be re-run after they have been resolved by the user. The time a migration takes to complete will depend on a number of factors (database size, network performance etc.), but can be anywhere from a couple of hours to a few days.
+- Data from the following AT modules will migrate:
+  - Lookup Lists
+  - Repositories
+  - Locations
+  - Users
+  - Subjects
+  - Names
+  - Accessions
+  - Digital Object and Digital Object Components
+  - Resources and Resource Components
+  - Assessments
+- Data from the following AT modules will not migrate:
+  - Reports
+
+## Assessing the Migration and Cleaning Up Data
+
+Use the migration report to assess the fidelity of the migration and to determine whether to:
+
+- Fix data in the source AT instance and conduct the migration again, or
+- Fix data in the target ArchivesSpace instance.
+
+If you choose to fix the data in AT and conduct the migration again, you will need to delete all the content in the ArchivesSpace instance.
+
+If you accept the migration in the ArchivesSpace instance, the following outlines how to check and fix your data.
+
+- Re-establish user passwords. While user records will migrate, the passwords associated with them will not. You will need to re-assign those passwords according to the policies or conventions of your repositories.
+- Review closely the set of sample records you selected:
+  - Accessions
+  - Resources
+  - Digital objects
+- Review the following groups of records, making sure the correct number of records migrated:
+  - Accessions
+  - Assessments
+  - Resources
+  - Digital objects
+  - Controlled vocabulary lists
+  - Subjects
+  - Agents (Name records in AT)
+  - Locations
+  - Collection Management Classifications
+  - There may be a few extra agent records due to ArchivesSpace defaults, or extra assessments if they were linked to records from more than one repository.
+- In conducting the reviews, look for duplicate or incomplete records, broken links, or truncated data.
+- Take special care to make sure your container data and locations are correct. The model for this is significantly different between AT and ArchivesSpace (where locations are tied to a container rather than directly to a resource or accession), so this presents some challenges for migration.
+- Merge enumeration values as necessary. For instance, if you had both 'local' and 'local sources' as a source for names, it might be a good idea to merge these values.
diff --git a/src/content/docs/ja/migrations/migrate_from_archon.md b/src/content/docs/ja/migrations/migrate_from_archon.md
new file mode 100644
index 0000000..f0402fb
--- /dev/null
+++ b/src/content/docs/ja/migrations/migrate_from_archon.md
@@ -0,0 +1,180 @@
+---
+title: Migrating from Archon
+description: Guidelines for migrating data from Archon 3.21-rev3 to ArchivesSpace 2.2.2 using the migration tool provided by ArchivesSpace.
+---
+
+These guidelines are for migrating data from Archon 3.21-rev3 to ArchivesSpace 2.2.2 using the migration tool provided by ArchivesSpace. Migrations of data from earlier versions of Archon or other versions of ArchivesSpace are not supported by these guidelines or the migration tool.
+
+> Note: A migration from Archon to ArchivesSpace should not be run against an active production database.
+
+## Preparing for migration
+
+Select a representative sample of accession, classification, collection, collection content, and digital object records to be examined closely when the migration is completed. Make sure to include both simple and more complicated or extensive records in the sample.
+
+Review your Archon database for data quality:
+
+### Accession Records
+
+- Supply an accession date for all records, when possible. If an accession date is not recorded in Archon, the date of 01/01/9999 will be supplied during the migration process. If you wish to change this default value, you may do so by editing the following file in the new Archon distribution, prior to running the migration:
+  `packages/core/templates/default/accession-list.inc.php`
+- Supply an identifier for all records, when possible. If an identifier is not recorded in Archon, a supplied identifier will be constructed during the migration process, consisting of the date and the truncated accession title.
+
+### Classification Records
+
+Ensure that there are no duplicate classification titles at the same level in the classification hierarchy. If the migration tool encounters a duplicate value, some of the save operations for classifications will fail, and you will need to redo the migration.
+
+### Collection Records
+
+If normalized dates are not recorded correctly (i.e. if the end date and begin date are reversed), they will not be migrated or may cause the migration to fail. To check for such entries, a system administrator can run the following query against the database:
+
+`SELECT ID, Title, NormalDateBegin, NormalDateEnd FROM tblCollections_Collections WHERE NormalDateBegin > NormalDateEnd;`
+
+### Level/Container Manager
+
+Review the settings to make sure that each 'level container' is appropriately marked with the correct values for "Intellectual Level" and "Physical Container" and that EAD Values are correctly recorded.
+ +![Level Container Manager](../../../../images/archon_level.jpg) + +Failure to code level container values correctly may result in incorrect nesting of resource components in ArchivesSpace. While the following information does not need to be acted upon prior to migration, please note the following if you find that content is not nested correctly after you migrate: + +- Collection content records that have a level container that is 'Intellectual Only' will be migrated to ArchivesSpace as resource components. Each level/container that has 'intellectual level' checked should have a valid value recorded in the "EAD Level" field (i.e. class, collection, file, fonds, item, otherlevel, recordgrp, series, subfonds, subgrp, subseries). These values are case sensitive, and all other values will be migrated as "otherlevel" on the collection content/resource component records to which they apply. +- Collection content records that have a level container that is 'Physical Only' will be migrated to ArchivesSpace as instance records of the type 'text' attached to a container in ArchivesSpace. These instance/container records will be attached to the intellectual level or levels that are immediate children of the container record as it was previously expressed in Archon. If the instance/container has no children it will be attached to its parent intellectual level instead. For illustrative purposes, the following screenshots show a container record prior to and following migration. + ![Archon container example](../../../../images/archon_container.jpg) +- Collection content records that have both physical and intellectual levels will be migrated as both resource components and instances. In this case the instance will be attached to the resource component. +- Collection content records that are neither physical nor intellectual levels will be migrated as if they were 'Intellectual Only'. This is not recommended and should be fixed prior to migration. 
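The four cases above reduce to a small decision table. As a quick illustration only (the function name and record labels below are invented for this sketch and are not part of the migration tool):

```python
def migration_outcome(intellectual: bool, physical: bool) -> list[str]:
    """Sketch of what an Archon collection content record becomes in
    ArchivesSpace, based on its level/container's two flags."""
    if not intellectual and not physical:
        # Neither flag set: migrated as if 'Intellectual Only'
        # (not recommended; fix these in Archon before migrating).
        intellectual = True
    outcome = []
    if intellectual:
        outcome.append("resource component")   # intellectual levels
    if physical:
        outcome.append("container instance")   # physical levels
    return outcome

print(migration_outcome(True, False))   # ['resource component']
print(migration_outcome(False, True))   # ['container instance']
print(migration_outcome(True, True))    # ['resource component', 'container instance']
print(migration_outcome(False, False))  # ['resource component']
```

In the 'Physical Only' case, where the record is attached (to child intellectual levels, or to the parent if there are none) depends on the surrounding hierarchy, which this sketch does not model.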
+
+### Collection Content Records
+
+- If a value has not been set in the "Title" or "Inclusive Dates" field of an "intellectual" level/container in Archon, the collection content record being migrated will be supplied a title, based on its "label" value and the "level/container" type set in Archon.
+  ![Collection Content Records](../../../../images/archon_collection.jpg)
+- Optionally, if a migration fails, check for collection content records that reference invalid 'level/containers'. These records are found in the database tables, but are not visible to staff or end users and must be eliminated prior to migration. If not eliminated, the migration will fail. In order to identify these records, you should follow these steps. **Be very careful. If you are uncertain what you are doing, back up the database first or speak with a systems administrator!**
+- In MySQL or SQL Server, open the table titled 'tblCollections_LevelContainers'. Note the 'ID' value recorded for each row (i.e. LevelContainer).
+- Run a query against tblCollections_Content to find records where the LevelContainerID column references an invalid value. For example, if tblCollections_LevelContainers holds 'ID' values 1-6 and 8-22:
+  `SELECT * FROM tblCollections_Content WHERE LevelContainerID > 22 OR (LevelContainerID > 6 AND LevelContainerID < 8);`
+  This will provide a list of all records with an invalid 'LevelContainerID' (i.e. where a record with the primary key referenced by a foreign key cannot be found). Review this list carefully to make sure you are comfortable deleting the records, or change the LevelContainerID to a valid integer if you wish to retain the records. If you choose to delete the records, you will need to do so directly in the database (see below). If you choose to do the latter, you may need to take additional steps directly in the database to link these records to a valid parent content record or collection; additional instructions can be supplied upon request.
+- Run a query to delete the invalid records from the collections content table. For example:
+  `DELETE FROM tblCollections_Content WHERE LevelContainerID > 22 OR (LevelContainerID > 6 AND LevelContainerID < 8);`
+- Optionally, if the migration fails, check for 'duplicate' collection content records. 'Duplicate' records are those that occupy the same node in the collection/content hierarchy. To check for these records, run the following query in MySQL or SQL Server:
+  `SELECT ParentID, SortOrder, COUNT(*) FROM tblCollections_Content GROUP BY ParentID, SortOrder HAVING COUNT(*) > 1;`
+- The query above checks for records that occupy the same branch and same position in the content hierarchy. If you discover such records, the sort order value of one of the records must be changed, so that both records occupy a unique position. In order to do this, run a query that finds all records attached to the parent record, then run an update query to change the sort order of one of the offending records so that each has a unique sort order. For example, if the query above returns ParentID 8619 as a 'duplicate' value, you would run query one with the appropriate ParentID value to identify the offending records, and query two to fix the problem:
+  **Query one:**
+
+  `SELECT ID, ParentID, SortOrder, Title FROM tblCollections_Content WHERE ParentID=8619;`
+
+  | ID   | ParentID | SortOrder | Title       |
+  | ---- | -------- | --------- | ----------- |
+  | 8620 | 8619     | 1         | to mother   |
+  | 8621 | 8619     | 1         | from mother |
+  | 8622 | 8619     | 3         | to father   |
+  | 6823 | 8619     | 4         | from father |
+
+  **Query two:**
+
+  `UPDATE tblCollections_Content SET SortOrder=2 WHERE ID=8621;`
+
+## Preparing for Migrating Archon Data
+
+The migration process is iterative in nature. You should plan to do several test migrations, culminating in a final migration. Typically, migration will require assistance from a system administrator.
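The duplicate sort-order check and fix shown in Query one and Query two earlier can be rehearsed end to end on a scratch copy of the table before touching real data. A minimal sketch, using Python's built-in sqlite3 as a stand-in for the production MySQL or SQL Server database:

```python
import sqlite3

# Scratch copy of the relevant table (sqlite3 stands in for MySQL/SQL Server).
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE tblCollections_Content "
    "(ID INTEGER, ParentID INTEGER, SortOrder INTEGER, Title TEXT)"
)
con.executemany(
    "INSERT INTO tblCollections_Content VALUES (?, ?, ?, ?)",
    [(8620, 8619, 1, "to mother"),
     (8621, 8619, 1, "from mother"),
     (8622, 8619, 3, "to father"),
     (6823, 8619, 4, "from father")],
)

# Detect nodes occupying the same branch and position (the duplicate check).
dupes = con.execute(
    "SELECT ParentID, SortOrder, COUNT(*) FROM tblCollections_Content "
    "GROUP BY ParentID, SortOrder HAVING COUNT(*) > 1"
).fetchall()
print(dupes)  # [(8619, 1, 2)]

# Fix: move one offending record to an unused sort order (Query two).
con.execute("UPDATE tblCollections_Content SET SortOrder = 2 WHERE ID = 8621")

# The duplicate check now returns no rows.
print(con.execute(
    "SELECT ParentID, SortOrder, COUNT(*) FROM tblCollections_Content "
    "GROUP BY ParentID, SortOrder HAVING COUNT(*) > 1"
).fetchall())  # []
```

Against the real database, run the same two statements through your usual MySQL or SQL Server client instead.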
+
+The migration tool will connect to your Archon installation, read data from defined 'endpoints', and place the information in a target ArchivesSpace instance.
+
+A migration report is generated at the end of each migration routine and can be downloaded from the application. The report indicates errors or issues occurring with the migration. Sample data from a migration report is provided in [Appendix A](#appendix-a-migration-log-review).
+
+You should use this report to determine if any problems observed in the migration results are best remedied in the source data or in the migrated data in the ArchivesSpace instance. If you address the problems in the source data, then you can simply clear the database and conduct the migration again. However, once you accept the migration and make changes to the migrated data in ArchivesSpace, you cannot migrate the source data again without either overwriting the previous migration or establishing a new target ArchivesSpace instance.
+
+Please note, data migration can be a very memory- and time-intensive task due to the large number of records being transferred. As such, we recommend running the Archon migration tool on a server with at least 2GB of available memory. Test migrations have run from under an hour to twelve hours or more in the case of complex and large instances of Archon.
+
+Before starting the migration process, make sure that your current Archon installation is up to date: i.e. that you are using version 3.21 rev3. If you are on an earlier version of Archon, make a copy of the Archon instance, including the database, to be migrated and use it as the source of the migration. It is strongly recommended that you not use your Archon production instance and database as the source of the migration for the simple reason of protecting the production version from any anomalies that might occur during the migration process.
Upgrade the copy of the Archon instance to version 3.21 rev3 prior to starting the migration process.
+
+### Get Archon to ArchivesSpace Migration Tool
+
+Download the latest JAR file release from https://github.com/archivesspace-deprecated/ArchonMigrator/releases/latest. This is an executable JAR file – double-click to run it.
+
+### Install ArchivesSpace Instance
+
+Set up a production ArchivesSpace instance, including the MySQL database you will migrate into. Instructions are included at [Getting Started with ArchivesSpace](/administration/getting_started) and [Running ArchivesSpace against MySQL](/provisioning/mysql).
+
+### Prepare to Launch Migration
+
+> **Important Note:** The migration process should be launched from a networked computer with a stable (i.e. wired) connection, and you should turn power save settings off on the client computer you use to launch the migration. So that the migration can proceed in an undisturbed fashion, you should not try to access the ArchivesSpace or Archon front end or public interface until after the migration has completed. **If you fail to follow these instructions, the migration tool may not provide useful feedback and it will be difficult to determine how successful the migration was.**
+
+For the most part, the data migration process should be automatic, with errors being provided as the tool migrates and a log being made available when migration is complete. Depending on the particular data being migrated, various errors may occur. These may require the migration to be re-run after they have been resolved by the user. When this occurs, the MySQL database should be emptied by the system administrator, and the migration rerun after steps are taken to resolve the problem that caused the error.
+
+The time that the migration takes to complete will depend on a number of factors (database size, network performance etc.), but has been known to take anywhere from a half hour to ten or twelve hours.
Most of this time will probably be spent migrating collection records.
+
+The following Archon datatypes will migrate, and all relationships that exist between these datatypes should be preserved in ArchivesSpace, except as noted in bold below. For each datatype, post-migration cleanup recommendations are provided in parentheses:
+
+- Editable controlled value lists:
+  - Subject sources (review post migration and merge values with ArchivesSpace defaults or functionally duplicate values, when possible)
+  - Creator sources (review post migration and merge values with ArchivesSpace defaults or functionally duplicate values, when possible)
+  - Extent units/types (merge functionally duplicate values)
+  - Material Types
+  - Container Types
+  - File Types
+  - Processing Priorities
+- Repositories
+- User/logins (users will need to reset password)
+- Subjects (subjects of type personal, corporate, or family name are migrated as Agent records, and are linked to resources and digital objects in the subject role. Review these records and merge with duplicate agent names from creator migration, when possible.)
+- Creators/Names
+- Accessions (The migration tool will supply accession identifiers when these are blank in Archon. Review and change values, if appropriate.)
+- Digital Objects: The migration tool will generate digital object metadata records in ArchivesSpace for each digital library record that is stored in your Archon instance. For each file that has an attached digital library record, the migration tool will generate a digital object component and file instance record. In addition, the migration tool will provide a folder containing the source file you uploaded to Archon when you created the record. In order to link these files to the digital file records in ArchivesSpace, you should place the files in a single directory on a webserver.
+  **To preserve the linkage between each file and its metadata record in ArchivesSpace, you must provide the base URL to the folder where the objects will be placed.** The migration tool prepends this URL to the filename to form a complete path to the object location, for each file being exported, as shown in the screenshot below. (In version 2.2.2 of ArchivesSpace, with the default digital object templates, these files will be available in the public interface by clicking a link.)
+- Locations (Controlled location records are much more granular in ArchivesSpace than in Archon. You should have a location record for each unique combination of location drop down, range, section, and shelf in Archon, and these records should be linked to top container records which are in turn linked to an instance for each collection where they apply.)
+- Resources and Resource Components (see locations, above).
+
+Data from the following Archon modules will not migrate to ArchivesSpace:
+
+- Books (Book data could be migrated later if a plugin is developed to support this data).
+- AVSAP/Assessments
+
+## Launch Migration Process
+
+Make sure the ArchivesSpace instance that you are migrating into is up and running, then open up the migration tool.
+
+![Archon migrator](../../../../images/archon_migrator.jpg)
+
+1. Change the default information in the migration tool user interface:
+   - ArchonSource – Supply the base URL for the Archon instance.
+   - Archon User – Username for an account with full administrator privileges.
+   - Password – Password for that same account.
+   - Download Digital Object Files checkbox – Check if you want to move any attached digital object files, and supply a web path to a web-accessible folder where you intend to place the digital objects after the migration is complete.
+   - Set Download Folder – Clicking this will open a file explorer that will allow you to specify the folder to which you want digital files from Archon to be downloaded.
+   - Set Default Repository checkbox – Select this checkbox to set which repository Accession records and unlinked Digital Objects are copied to. The default is "Based on Linked Collection," which will copy Accession records to the same repository as any Collection records they are linked to, or the first repository if they are not. You can also select a specific repository from the drop-down list.
+   - Host – The URL and port number of the ArchivesSpace backend server.
+   - ASpace admin – User name for the ArchivesSpace "admin" account. The default value of "admin" should work unless it was changed by the ArchivesSpace administrator.
+   - Password – Password for the ArchivesSpace "admin" account. The default value of "admin" should work unless it was changed by the ArchivesSpace administrator.
+   - Reset Password – Each user account transferred has its password reset to this. Please note that users need to change their password when they first log in unless LDAP is used for authentication.
+   - Migration Options – This is needed for the functioning of the migration tool. Please do not make changes to this area.
+   - Output Console – Display section for following the migration while it is running.
+   - View Error Log – Used to view a printout of all the errors encountered during the migration process. This can be used while the migration process is underway as well.
+2. Press the "Copy to ArchivesSpace" button to start the migration process. This starts the migration to the ArchivesSpace instance indicated by the Host URL.
+3. If the migration process fails: Review the error message provided and/or the migration log. Fix any issues that have been identified, clear the target MySQL database, and try again.
+4. When the process has completed:
+   - Download the migration report.
+   - Move digital objects into the folder location corresponding to the URL you provided to the migration tool.
Use the migration report to assess the fidelity of the migration and to determine whether to fix data in the source Archon instance and conduct the migration again, or fix data in the target ArchivesSpace instance. If you choose to fix data in Archon, you will need to clear the ArchivesSpace database and then rerun the migration.
+2. Review the following record types, making sure the correct number of records migrated. In conducting the reviews, look for duplicate or incomplete records, broken links, or truncated data.
+   - Controlled vocabulary lists
+   - Classifications
+   - Accessions
+   - Resources
+   - Digital objects
+   - Subjects (not persons, families, and corporate bodies)
+   - Creators (known as Agents in ArchivesSpace)
+   - Locations
+3. Review closely the set of sample records you selected, comparing data in Archon to data in ArchivesSpace.
+4. If you accept the migration in the ArchivesSpace instance, then proceed to re-establish user passwords. While user records will migrate, the passwords associated with them will not. You will need to reassign those passwords according to the policies or conventions of your repositories.
+
+## Appendix A: Migration Log Review
+
+The migration log provides a description of any irregularities that take place during a migration and should be saved in a secure location for future reference. The log contains both save errors and warnings. The warnings should be reviewed after the migration for information and potential action.
+
+Most warnings will not require a follow-up action. For example, they may note that a supplied value has been provided to meet an ArchivesSpace data model requirement. This occurs for all collections with empty identifiers. Occasionally, warnings will indicate that there was a problem establishing a link between two records for a reason such as a resource component not being found.
Warnings like this should be cause for review since they may indicate that some data was lost.
+
+Save errors will note that a particular piece of data could not be migrated because it is not supported in the ArchivesSpace data model or for some other reason. In these cases, you should review the record in Archon and in ArchivesSpace (if it was migrated at all). Oftentimes, these occur due to duplicate records (such as if you have a matching creator and person subject). If a save error occurs due to a duplicate record, this is usually okay but should still be reviewed to make sure there was no data loss. If a save error occurs for any other reason, this typically means the migration will need to be rerun (unless the record it occurred on is not needed, or it is easier to migrate it manually).
+
+Typically, the migration log will record the Archon internal IDs of the original Archon object being migrated whenever a save error or warning occurs. This simplifies finding and correcting relevant records.
diff --git a/src/content/docs/ja/migrations/migration_tools.md b/src/content/docs/ja/migrations/migration_tools.md
new file mode 100644
index 0000000..523f0e4
--- /dev/null
+++ b/src/content/docs/ja/migrations/migration_tools.md
@@ -0,0 +1,59 @@
+---
+title: Migration tools
+description: Links to tools for migrating data into and out of ArchivesSpace.
+---
+
+## Archivists' Toolkit
+
+- [AT migration tool instructions](/migrations/migrate_from_archivists_toolkit)
+- [AT migration plugin](https://github.com/archivesspace/at-migration/releases)
+- [AT migration source code](https://github.com/archivesspace/at-migration)
+- [AT migration mapping (for 2.x versions of the tool and ArchivesSpace)](https://github.com/archivesspace/at-migration/blob/master/docs/ATMappingDocument.xlsx)
+
+### Older information
+
+- [AT migration guidelines (for migrations using the original migration tool through version 1.4.2; only supports migrations to version 1.4.2 or lower of ArchivesSpace)](http://archivesspace.org/wp-content/uploads/2016/08/ATMigrationGuidelines-REV-20140417.pdf)
+- [AT migration mapping (for migrations through version 1.4.2 or lower of the tool and ArchivesSpace)](http://archivesspace.org/wp-content/uploads/2016/08/ATMappingDocument_AT-ASPACE_BETA.xls)
+
+## Archon
+
+- [Archon migration tool instructions](/migrations/migrate_from_archon)
+- [Archon migration tool](https://github.com/archivesspace/archon-migration/releases/latest)
+- [Archon migration source code](https://github.com/archivesspace/archon-migration/)
+- [Archon migration mapping (for 2.x versions of the tool and ArchivesSpace)](https://docs.google.com/spreadsheets/d/13soN5djk16QYmRoSajtyAc_nBrNldyL58ViahKFJAog/edit?usp=sharing)
+
+### Older information
+
+- [refactored Archon migration plugin](https://github.com/archivesspace-deprecated/ArchonMigrator/releases)
+- [information about refactoring project](https://archivesspace.atlassian.net/browse/AR-1278)
+- [previous Archon migration plugin](https://github.com/archivesspace/archon-migration/releases)
+- [Plugin README text](https://github.com/archivesspace-deprecated/ArchonMigrator/blob/master/README.md)
+- [Archon migration guidelines](http://archivesspace.org/wp-content/uploads/2016/05/Archon_Migration_Guidelines-7_13_2017.docx)
+- [Archon migration 
mapping](http://archivesspace.org/wp-content/uploads/2016/08/ArchonSchemaMappingsPublic.xlsx)
+
+## Data Import and Export Maps
+
+- [Accession CSV Map](http://archivesspace.org/wp-content/uploads/2016/05/Accession-CSV-mapping-2013-08-05.xlsx)
+- [Accession CSV Template](https://github.com/archivesspace/archivesspace/tree/master/templates)
+- [Archival Objects from Excel or CSV with Load Via Spreadsheet](https://github.com/archivesspace/archivesspace/tree/master/templates)
+- [Assessment CSV Template](https://github.com/archivesspace/archivesspace/tree/master/templates)
+- [Digital Object CSV Map](http://archivesspace.org/wp-content/uploads/2016/08/DigitalObject-CSV-mapping-2013-02-26.xlsx)
+- [Digital Object CSV Template](https://github.com/archivesspace/archivesspace/tree/master/templates)
+- [Digital Objects Export Maps](http://archivesspace.org/wp-content/uploads/2016/08/ASpace-Dig-Object-Exports.xlsx)
+- [EAD Import / Export Map](https://archivesspace.org/wp-content/uploads/2021/06/EAD-Import-Export-Mapping-20171030.xlsx)
+- [Location Record CSV Template](https://github.com/archivesspace/archivesspace/tree/master/templates)
+- (newly reviewed) [MARCXML Import Map](https://archivesspace.org/wp-content/uploads/2021/06/AS-MARC-import-mappings-2021-06-15.xlsx)
+- [MARCXML Export Map](https://archivesspace.org/wp-content/uploads/2021/06/MARCXML-Export-Mapping-20130715.xlsx)
+- [MARCXML Authority Import / Export Map](https://archivesspace.org/wp-content/uploads/2021/05/Agents-ASpace-to-MARCXMLMay2021.xlsx)
+- [EAC-CPF Import / Export Map](https://archivesspace.org/wp-content/uploads/2021/05/Agents-ASpace-to-EAC-CPFMay2021.xlsx)
+
+### OAI-PMH-only maps
+
+Most ArchivesSpace OAI-PMH responses are based on the export maps above, but there are a few maps that are only available through OAI-PMH:
+
+- [MODS for resources and resource components](https://archivesspace.org/wp-content/uploads/2019/06/MODS-OAI-Export-Mapping-20190610.xlsx)
+- [Dublin Core for resources and resource components](https://archivesspace.org/wp-content/uploads/2019/06/DC-OAI-Export-Mapping-20190610.xlsx)
+- [DCMI Metadata Terms for resources and resource components](https://archivesspace.org/wp-content/uploads/2019/06/DCTerms-OAI-Export-Mapping-20190611.xlsx)
diff --git a/src/content/docs/ja/provisioning/clustering.md b/src/content/docs/ja/provisioning/clustering.md
new file mode 100644
index 0000000..db73b24
--- /dev/null
+++ b/src/content/docs/ja/provisioning/clustering.md
@@ -0,0 +1,370 @@
+---
+title: Load balancing and multiple tenants
+description: Guidelines for running ArchivesSpace in a clustered environment for load-balancing purposes, and for supporting multiple tenants.
+---
+
+This document describes two aspects of running ArchivesSpace in a
+clustered environment: for load-balancing purposes, and for supporting
+multiple tenants (isolated installations of the system in a common
+deployment environment).
+
+The configuration described in this document is one possible approach,
+but it is not intended to be prescriptive: the application layer of
+ArchivesSpace is stateless, so any mechanism you prefer for load
+balancing across web applications should work just as well as the one
+described here.
+
+Unless otherwise stated, it is assumed that you have root access on
+your machines, and all commands are to be run as root (or with sudo). 
+
+## Architecture overview
+
+This document assumes an architecture with the following components:
+
+- A load balancer machine running the Nginx web server
+- Two application servers, each running a full ArchivesSpace
+  application stack
+- A MySQL server
+- A shared NFS volume mounted under `/aspace` on each machine
+
+## Overview of files
+
+The `files` directory in this repository (in the same directory as this
+`README.md`) contains what will become the contents of the `/aspace`
+directory, shared by all servers. It has the following layout:
+
+    /aspace
+    ├── archivesspace
+    │   ├── config
+    │   │   ├── config.rb
+    │   │   └── tenant.rb
+    │   ├── software
+    │   └── tenants
+    │       └── _template
+    │           └── archivesspace
+    │               ├── config
+    │               │   ├── config.rb
+    │               │   └── instance_hostname.rb.example
+    │               └── init_tenant.sh
+    └── nginx
+        └── conf
+            ├── common
+            │   └── server.conf
+            └── tenants
+                └── _template.conf.example
+
+The highlights:
+
+- `/aspace/archivesspace/config/config.rb` -- A global configuration file for all ArchivesSpace instances. Any configuration options added to this file will be applied to all tenants on all machines.
+- `/aspace/archivesspace/software/` -- This directory will hold the master copies of the `archivesspace.zip` distribution. Each tenant will reference one of the versions of the ArchivesSpace software in this directory.
+- `/aspace/archivesspace/tenants/` -- Each tenant will have a sub-directory under here, based on the `_template` directory provided. This holds the configuration files for each tenant.
+- `/aspace/archivesspace/tenants/[tenant name]/config/config.rb` -- The global configuration file for [tenant name]. This contains tenant-specific options that should apply to all of the tenant's ArchivesSpace instances (such as their database connection settings). 
+- `/aspace/archivesspace/tenants/[tenant name]/config/instance_[hostname].rb` -- The configuration file for a tenant's ArchivesSpace instance running on a particular machine. This allows configuration options to be set on a per-machine basis (for example, setting different ports for different application servers) +- `/aspace/nginx/conf/common/server.conf` -- Global Nginx configuration settings (applying to all tenants) +- `/aspace/nginx/conf/tenants/[tenant name].conf` -- A tenant-specific Nginx configuration file. Used to set the URLs of each tenant's ArchivesSpace instances. + +## Getting started + +We'll assume you already have the following ready to go: + +- Three newly installed machines, each running RedHat (or CentOS) + Linux (we'll refer to these as `loadbalancer`, `apps1` and + `apps2`). +- A MySQL server. +- An NFS volume that has been mounted as `/aspace` on each machine. + All machines should have full read/write access to this area. +- An area under `/aspace.local` which will store instance-specific + files (such as log files and Solr indexes). Ideally this is just + a directory on local disk. +- Java 1.6 (or above) installed on each machine. + +### Populate your /aspace/ directory + +Start by copying the directory structure from `files/` into your +`/aspace` volume. This will contain all of the configuration files +shared between servers: + +```shell +mkdir /var/tmp/aspace/ +cd /var/tmp/aspace/ +unzip -x /path/to/archivesspace.zip +cp -av archivesspace/clustering/files/* /aspace/ +``` + +You can do this on any machine that has access to the shared +`/aspace/` volume. + +### Install the cluster init script + +On your application servers (`apps1` and `apps2`) you will need to +install the supplied init script: + +```shell +cp -a /aspace/aspace-cluster.init /etc/init.d/aspace-cluster +chkconfig --add aspace-cluster +``` + +This will start all configured instances when the system boots up, and +can also be used to start/stop individual instances. 
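+
+As a rough sketch of day-to-day use of the init script: the `start-tenant` action is used later in this guide, while the plain `start` and the `stop-tenant` actions shown here are assumptions about a conventional init script and may differ in your copy.
+
+```shell
+# Start every configured tenant instance on this server (this is what
+# happens automatically at boot once chkconfig has registered the script)
+/etc/init.d/aspace-cluster start
+
+# Start or stop a single tenant's instance; the tenant name is
+# illustrative and is defined later in this guide
+/etc/init.d/aspace-cluster start-tenant exampletenant
+/etc/init.d/aspace-cluster stop-tenant exampletenant
+```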
+
+### Install and configure Nginx
+
+You will need to install Nginx on your `loadbalancer` machine, which
+you can do by following the directions at
+http://nginx.org/en/download.html. Using the pre-built packages for
+your platform is fine. At the time of writing, the process for CentOS
+is simply:
+
+```shell
+wget http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm
+rpm -i nginx-release-centos-6-0.el6.ngx.noarch.rpm
+yum install nginx
+```
+
+Nginx will place its configuration files under `/etc/nginx/`. For
+now, the only change we need to make is to configure Nginx to load our
+tenants' configuration files. To do this, edit
+`/etc/nginx/conf.d/default.conf` and add the line:
+
+```
+include /aspace/nginx/conf/tenants/*.conf;
+```
+
+_Note:_ the location of Nginx's main config file might vary between
+systems. Another likely candidate is `/etc/nginx/nginx.conf`.
+
+### Download the ArchivesSpace distribution
+
+Rather than having every tenant maintain their own copy of the
+ArchivesSpace software, we put a shared copy under
+`/aspace/archivesspace/software/` and have each tenant instance refer
+to that copy. To set this up, run the following commands on any one
+of the servers:
+
+```shell
+cd /aspace/archivesspace/software/
+unzip -x /path/to/downloaded/archivesspace-x.y.z.zip
+mv archivesspace archivesspace-x.y.z
+ln -s archivesspace-x.y.z stable
+```
+
+Note that we unpack the distribution into a directory containing its
+version number, and then assign that version the symbolic name
+"stable". This gives us a convenient way of referring to particular
+versions of the software, and we'll use this later on when setting up
+our tenant.
+
+We'll be using MySQL, which means we must make the MySQL connector
+library available. 
To do this, place it in the `lib/` directory of +the ArchivesSpace package: + +```shell +cd /aspace/archivesspace/software/stable/lib +wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.24/mysql-connector-java-5.1.24.jar +``` + +## Defining a new tenant + +With our server setup out of the way, we're ready to define our first +tenant. As shown in _Overview of files_ above, each tenant has their +own directory under `/aspace/archivesspace/tenants/` that holds all of +their configuration files. In defining our new tenant, we will: + +- Create a Unix account for the tenant +- Create a database for the tenant +- Create a new set of ArchivesSpace configuration files for the + tenant +- Set up the database + +Our newly defined tenant won't initially have any ArchivesSpace +instances, but we'll set those up afterwards. + +To complete the remainder of this process, there are a few bits of +information you will need. In particular, you will need to know: + +- The identifier you will use for the tenant you will be creating. + In this example we use `exampletenant`. +- Which port numbers you will use for the application's backend, + Solr instance, staff and public interfaces. These must be free on + your application servers. +- If running each tenant under a separate Unix account, the UID and + GID you'll use for them (which must be free on each of your + servers). +- The public-facing URLs for the new tenant. We'll use + `staff.example.com` for the staff interface, and `public.example.com` + for the public interface. + +### Creating a Unix account + +Although not strictly required, for security and ease of system +monitoring it's a good idea to have each tenant instance running under +a dedicated Unix account. + +We will call our new tenant `exampletenant`, so let's create a user +and group for them now. 
You will need to run these commands on _both_
+application servers (`apps1` and `apps2`):
+
+```shell
+groupadd --gid 2000 exampletenant
+useradd --uid 2000 --gid 2000 exampletenant
+```
+
+Note that we specify a UID and GID explicitly to ensure they match
+across machines.
+
+### Creating the database
+
+ArchivesSpace assumes that each tenant will have their own MySQL
+database. You can create this from the MySQL shell:
+
+```sql
+create database exampletenant default character set utf8;
+grant all on exampletenant.* to 'example'@'%' identified by 'example123';
+```
+
+In this example, we have a MySQL database called `exampletenant`, and
+we grant full access to the user `example` with password `example123`.
+Assuming our database server is `db.example.com`, this corresponds to
+the database URL:
+
+```
+jdbc:mysql://db.example.com:3306/exampletenant?user=example&password=example123&useUnicode=true&characterEncoding=UTF-8
+```
+
+We'll make use of this URL in the following section.
+
+### Creating the tenant configuration
+
+Each tenant has their own set of files under the
+`/aspace/archivesspace/tenants/` directory. We'll define our new
+tenant (called `exampletenant`) by copying the template set of
+configurations and running the `init_tenant.sh` script to set them
+up. We can do this on either `apps1` or `apps2`--it only needs to be
+done once:
+
+```shell
+cd /aspace/archivesspace/tenants
+cp -a _template exampletenant
+```
+
+Note that we've named the tenant `exampletenant` to match the Unix
+account it will run as. Later on, the startup script will use this
+fact to run each instance as the correct user.
+
+For now, we'll just edit the configuration file for this tenant, under
+`exampletenant/archivesspace/config/config.rb`. 
When you open this file you'll see two +placeholders that need filling in: one for your database URL, which in +our case is just: + +``` +jdbc:mysql://db.example.com:3306/exampletenant?user=example&password=example123&useUnicode=true&characterEncoding=UTF-8 +``` + +and the other for this tenant's search, staff and public user secrets, +which should be random, hard to guess passwords. + +## Adding the tenant instances + +To add our tenant instances, we just need to initialize them on each +of our servers. On `apps1` _and_ `apps2`, we run: + +```shell +cd /aspace/archivesspace/tenants/exampletenant/archivesspace +./init_tenant.sh stable +``` + +If you list the directory now, you will see that the `init_tenant.sh` +script has created a number of symlinks. Most of these refer back to +the `stable` version of the ArchivesSpace software we unpacked +previously, and some contain references to the `data` and `logs` +directories stored under `/aspace.local`. + +Each server has its own configuration file that tells the +ArchivesSpace application which ports to listen on. To set this up, +make two copies of the example configuration by running the following +command on `apps1` then `apps2`: + +```shell +cd /aspace/archivesspace/tenants/exampletenant/archivesspace +cp config/instance_hostname.rb.example config/instance_`hostname`.rb +``` + +Then edit each file to set the URLs that the instance will use. 
+Here's our `config/instance_apps1.example.com.rb`: + +```ruby +{ + :backend_url => "http://apps1.example.com:8089", + :frontend_url => "http://apps1.example.com:8080", + :solr_url => "http://apps1.example.com:8090", + :indexer_url => "http://apps1.example.com:8091", + :public_url => "http://apps1.example.com:8081", +} +``` + +Note that the filename is important here: it must be: + +``` +instance_[server hostname].rb +``` + +These URLs will determine which ports the application listens on when +it starts up, and are also used by the ArchivesSpace indexing system +to track updates across the cluster. + +### Starting up + +As a one-off, we need to populate this tenant's database with the +default set of tables. You can do this by running the +`setup-database.sh` script on either `apps1` or `apps2`: + +```shell +cd /aspace/archivesspace/tenants/exampletenant/archivesspace +scripts/setup-database.sh +``` + +With the two instances configured, you can now use the init script to +start them up on each server: + +```shell +/etc/init.d/aspace-cluster start-tenant exampletenant +``` + +and you can monitor each instance's log file under +`/aspace.local/tenants/exampletenant/logs/`. Once they're started, +you should be able to connect to each instance with your web browser +at the configured URLs. + +## Configuring the load balancer + +Our final step is configuring Nginx to accept requests for our staff +and public interfaces and forward them to the appropriate application +instance. Working on the `loadbalancer` machine, we create a new +configuration file for our tenant: + +```shell +cd /aspace/nginx/conf/tenants +cp -a \_template.conf.example exampletenant.conf +``` + +Now open `/aspace/nginx/conf/tenants/exampletenant.conf` in an +editor. You will need to: + +- Replace `<tenantname>` with `exampletenant` where it appears. +- Change the `server` URLs to match the hostnames and ports you + configured each instance with. 
+- Insert the tenant's hostnames for each `server_name` entry. In + our case these are `public.example.com` for the public interface, and + `staff.example.com` for the staff interface. + +Once you've saved your configuration, you can test it with: + + /usr/sbin/nginx -t + +If Nginx reports that all is well, reload the configurations with: + + /usr/sbin/nginx -s reload + +And, finally, browse to `http://public.example.com/` to verify that Nginx +is now accepting requests and forwarding them to your app servers. +We're done! diff --git a/src/content/docs/ja/provisioning/domains.md b/src/content/docs/ja/provisioning/domains.md new file mode 100644 index 0000000..9fa0d3e --- /dev/null +++ b/src/content/docs/ja/provisioning/domains.md @@ -0,0 +1,85 @@ +--- +title: Serving over subdomains +description: How to configure ArchivesSpace and your web server to serve the application over subdomains. +--- + +This document describes how to configure ArchivesSpace and your web server to serve the application over subdomains (e.g., `http://staff.myarchive.org/` and `http://public.myarchive.org/`), which is the recommended +practice. Separate documentation is available if you wish to [serve ArchivesSpace under a prefix](/provisioning/prefix) (e.g., `http://aspace.myarchive.org/staff` and +`http://aspace.myarchive.org/public`). + +1. [Configuring Your Firewall](#Step-1%3A-Configuring-Your-Firewall) +2. [Configuring Your Web Server](#Step-2%3A-Configuring-Your-Web-Server) + - [Apache](#Apache) + - [Nginx](#Nginx) +3. [Configuring ArchivesSpace](#Step-3%3A-Configuring-ArchivesSpace) + +## Step 1: Configuring Your Firewall + +Since using subdomains negates the need for users to access the application directly on ports 8080 and 8081, these should be locked down to access by localhost only. 
On a Linux server, this can be done using iptables:
+
+```shell
+iptables -A INPUT -p tcp -s localhost --dport 8080 -j ACCEPT
+iptables -A INPUT -p tcp --dport 8080 -j DROP
+iptables -A INPUT -p tcp -s localhost --dport 8081 -j ACCEPT
+iptables -A INPUT -p tcp --dport 8081 -j DROP
+```
+
+## Step 2: Configuring Your Web Server
+
+### Apache
+
+The [mod_proxy module](https://httpd.apache.org/docs/2.4/mod/mod_proxy.html) is necessary for Apache to route public web traffic to ArchivesSpace's ports as designated in your config.rb file (ports 8080 and 8081 by default).
+
+This can be set up as a reverse proxy in the Apache configuration like so:
+
+```apache
+<VirtualHost *:80>
+  ServerName public.myarchive.org
+  ProxyPass / http://localhost:8081/
+  ProxyPassReverse / http://localhost:8081/
+</VirtualHost>
+
+<VirtualHost *:80>
+  ServerName staff.myarchive.org
+  ProxyPass / http://localhost:8080/
+  ProxyPassReverse / http://localhost:8080/
+</VirtualHost>
+```
+
+The purpose of ProxyPass is to route _incoming_ traffic on the public URL (public.myarchive.org) to port 8081 of your server, where ArchivesSpace's public interface sits. The purpose of ProxyPassReverse is to intercept _outgoing_ traffic and rewrite the `Location` header to match the URL that the browser is expecting to see (public.myarchive.org). 
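+
+Once Apache has been reloaded, a quick sanity check is to request each hostname and confirm the response comes from ArchivesSpace rather than the web server's default page. This is a sketch: it assumes DNS for both hostnames already points at this server.
+
+```shell
+# Fetch only the response headers through the proxy; an ArchivesSpace
+# response (rather than an Apache error page) means ProxyPass is
+# routing requests correctly
+curl -I http://public.myarchive.org/
+curl -I http://staff.myarchive.org/
+
+# Compare against the backend directly; run this on the server itself,
+# since the iptables rules above block remote access to ports 8080/8081
+curl -I http://localhost:8081/
+```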
+
+### Nginx
+
+To use Nginx as a reverse proxy, a configuration like the following is needed:
+
+```nginx
+server {
+  listen 80;
+  listen [::]:80;
+  server_name staff.myarchive.org;
+
+  location / {
+    proxy_pass http://localhost:8080/;
+  }
+}
+
+server {
+  listen 80;
+  listen [::]:80;
+  server_name public.myarchive.org;
+
+  location / {
+    proxy_pass http://localhost:8081/;
+  }
+}
+```
+
+## Step 3: Configuring ArchivesSpace
+
+The only configuration needed within ArchivesSpace is adding your domain names to the following lines in config.rb:
+
+```ruby
+AppConfig[:frontend_proxy_url] = 'http://staff.myarchive.org'
+AppConfig[:public_proxy_url] = 'http://public.myarchive.org'
+```
+
+This configuration allows staff edit links to appear on the public site for users who are logged in to the staff interface.
+
+Do **not** change `AppConfig[:public_url]` or `AppConfig[:frontend_url]`; these must retain their port numbers in order for the application to run.
diff --git a/src/content/docs/ja/provisioning/https.md b/src/content/docs/ja/provisioning/https.md
new file mode 100644
index 0000000..b02732c
--- /dev/null
+++ b/src/content/docs/ja/provisioning/https.md
@@ -0,0 +1,163 @@
+---
+title: Serving over HTTPS
+description: Installing ArchivesSpace in such a manner that all end-user requests are served over HTTPS.
+---
+
+This document describes the approach for those wishing to install
+ArchivesSpace in such a manner that all end-user requests (i.e., URLs)
+are served over HTTPS rather than HTTP. 
For the purposes of this documentation, the URLs for the staff and public interfaces will be:
+
+- `https://staff.myarchive.org` - staff interface
+- `https://public.myarchive.org` - public interface
+
+The configuration described in this document is one possible approach,
+and to keep things simple the following are assumed:
+
+- ArchivesSpace is running on a single Linux server
+- The server is running Apache or Nginx
+- You have obtained an SSL certificate and key from an authority
+- You have ensured that appropriate firewall ports have been opened (80 and 443).
+
+1. [Configuring the Web Server](<#Step-1%3A-Configure-Web-Server-(Apache-or-Nginx)>)
+   - [Apache](#Apache)
+     - [Setting up SSL](#Setting-up-SSL)
+     - [Setting up Redirects](#Setting-up-Redirects)
+   - [Nginx](#Nginx)
+2. [Configuring ArchivesSpace](#Step-2%3A-Configure-ArchivesSpace)
+
+## Step 1: Configure Web Server (Apache or Nginx)
+
+### Apache
+
+Information about configuring Apache for SSL can be found at http://httpd.apache.org/docs/current/ssl/ssl_howto.html. You should read
+that documentation before attempting to configure SSL.
+
+#### Setting up SSL
+
+Use the `NameVirtualHost` and `VirtualHost` directives to proxy
+requests to the actual application URLs. This requires the use of the `mod_proxy` module in Apache. 
+
+```apache
+NameVirtualHost *:443
+
+<VirtualHost *:443>
+  ServerName staff.myarchive.org
+  SSLEngine On
+  SSLCertificateFile "/path/to/your/cert.crt"
+  SSLCertificateKeyFile "/path/to/your/key.key"
+  RequestHeader set X-Forwarded-Proto "https"
+  ProxyPreserveHost On
+  ProxyPass / http://localhost:8080/
+  ProxyPassReverse / http://localhost:8080/
+</VirtualHost>
+
+<VirtualHost *:443>
+  ServerName public.myarchive.org
+  SSLEngine On
+  SSLCertificateFile "/path/to/your/cert.crt"
+  SSLCertificateKeyFile "/path/to/your/key.key"
+  RequestHeader set X-Forwarded-Proto "https"
+  ProxyPreserveHost On
+  ProxyPass / http://localhost:8081/
+  ProxyPassReverse / http://localhost:8081/
+</VirtualHost>
+```
+
+You may optionally set the `Secure` attribute on the `Set-Cookie` header by adding `Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure`. When a cookie has the `Secure` attribute, the user agent will include the cookie in an HTTP request only if the request is transmitted over a secure channel.
+
+Users may encounter a warning in the browser's console stating `Cookie "archivesspace_session" does not have a proper "SameSite" attribute value. Soon, cookies without the "SameSite" attribute or with an invalid value will be treated as "Lax". This means that the cookie will no longer be sent in third-party contexts` (example from Firefox 104) or something similar. Some browsers (for example, Chrome version 80 or above) already enforce this.
+
+Standard ArchivesSpace installations should be unaffected, but if you encounter problems with integrations and/or customizations of your particular installation, the following directive may be required: `Header edit Set-Cookie ^(.*)$ $1;SameSite=None;Secure`. Alternatively, it may be the case that `SameSite=Lax` (the default) or even `SameSite=Strict` are more appropriate depending on your functional and/or security requirements. 
Please refer to https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie/SameSite or other resources for more information.
+
+#### Setting up Redirects
+
+When running a site over HTTPS, it's a good idea to set up a redirect to ensure any outdated HTTP requests are routed to the correct URL. This can be done through Apache as follows:
+
+```apache
+<VirtualHost *:80>
+  ServerName staff.myarchive.org
+  RewriteEngine On
+  RewriteCond %{HTTPS} off
+  RewriteRule (.*) https://staff.myarchive.org$1 [R,L]
+</VirtualHost>
+
+<VirtualHost *:80>
+  ServerName public.myarchive.org
+  RewriteEngine On
+  RewriteCond %{HTTPS} off
+  RewriteRule (.*) https://public.myarchive.org$1 [R,L]
+</VirtualHost>
+```
+
+### Nginx
+
+Information about configuring nginx for SSL can be found at http://nginx.org/en/docs/http/configuring_https_servers.html. You should read
+that documentation before attempting to configure SSL.
+
+```nginx
+server {
+  listen 80;
+  listen [::]:80;
+  server_name staff.myarchive.org;
+  return 301 https://staff.myarchive.org$request_uri;
+}
+
+server {
+  listen 443 ssl;
+  server_name staff.myarchive.org;
+  charset utf-8;
+
+  ssl_certificate /path/to/your/fullchain.pem;
+  ssl_certificate_key /path/to/your/key.pem;
+
+  location / {
+    # Forward the original scheme so ArchivesSpace generates https:// links
+    proxy_set_header X-Forwarded-Proto https;
+    proxy_pass http://localhost:8080;
+  }
+}
+
+server {
+  listen 80;
+  listen [::]:80;
+  server_name public.myarchive.org;
+  return 301 https://public.myarchive.org$request_uri;
+}
+
+server {
+  listen 443 ssl;
+  server_name public.myarchive.org;
+  charset utf-8;
+
+  ssl_certificate /path/to/your/fullchain.pem;
+  ssl_certificate_key /path/to/your/key.pem;
+
+  location / {
+    proxy_set_header X-Forwarded-Proto https;
+    proxy_pass http://localhost:8081;
+  }
+}
+```
+
+## Step 2: Configure ArchivesSpace
+
+The following lines need to be altered in the config.rb file:
+
+```ruby
+AppConfig[:frontend_proxy_url] = "https://staff.myarchive.org"
+AppConfig[:public_proxy_url] = "https://public.myarchive.org"
+```
+
+These lines don't need to 
be altered and should remain at their default values. E.g.:
+
+```ruby
+AppConfig[:frontend_url] = "http://localhost:8080"
+AppConfig[:public_url] = "http://localhost:8081"
+AppConfig[:frontend_proxy_prefix] = proc { "#{URI(AppConfig[:frontend_proxy_url]).path}/".gsub(%r{/+$}, "/") }
+AppConfig[:public_proxy_prefix] = proc { "#{URI(AppConfig[:public_proxy_url]).path}/".gsub(%r{/+$}, "/") }
+```
diff --git a/src/content/docs/ja/provisioning/index.md b/src/content/docs/ja/provisioning/index.md
new file mode 100644
index 0000000..95ea9e7
--- /dev/null
+++ b/src/content/docs/ja/provisioning/index.md
@@ -0,0 +1,15 @@
+---
+title: Provisioning and server configuration
+description: The index to the provisioning section of the ArchivesSpace technical documentation.
+---
+
+- [Running ArchivesSpace with load balancing and multiple tenants](./clustering.html)
+- [Serving ArchivesSpace over subdomains](./domains.html)
+- [Serving ArchivesSpace user-facing applications over HTTPS](./https.html)
+- [JMeter Test Group Template](./jmeter.html)
+- [Running ArchivesSpace against MySQL](./mysql.html)
+- [Application monitoring with New Relic](./newrelic.html)
+- [Running ArchivesSpace under a prefix](./prefix.html)
+- [robots.txt](./robots.html)
+- [Running ArchivesSpace with external Solr](./solr.html)
+- [Tuning ArchivesSpace](./tuning.html)
diff --git a/src/content/docs/ja/provisioning/jmeter.md b/src/content/docs/ja/provisioning/jmeter.md
new file mode 100644
index 0000000..0373a4d
--- /dev/null
+++ b/src/content/docs/ja/provisioning/jmeter.md
@@ -0,0 +1,13 @@
+---
+title: JMeter Test Group Template
+description: How to create a JMeter Test Group. 
+---
+
+## Creating a test group
+
+Load the file 'example_test_plan.jmx' into JMeter and make sure the following are true for the example to run successfully:
+
+- The backend is running on localhost port 4567
+- There is at least one repository, and its URL is /repositories/2
+
+The example will log in to the backend, store the session key as a JMeter variable, and make two basic requests, one of which will require a session key.
diff --git a/src/content/docs/ja/provisioning/mysql.md b/src/content/docs/ja/provisioning/mysql.md
new file mode 100644
index 0000000..8ba110a
--- /dev/null
+++ b/src/content/docs/ja/provisioning/mysql.md
@@ -0,0 +1,89 @@
+---
+title: Using MySQL
+description: Instructions for how to set up MySQL with ArchivesSpace.
+---
+
+Out of the box, the ArchivesSpace distribution runs against an
+embedded database, but this is only suitable for demonstration
+purposes. When you are ready to start using ArchivesSpace with
+real users and data, you should switch to using MySQL. MySQL offers
+significantly better performance when multiple people are using the
+system, and will ensure that your data is kept safe.
+
+ArchivesSpace is currently able to run on MySQL version 5.x & 8.x.
+
+## Download MySQL Connector
+
+ArchivesSpace requires the
+[MySQL Connector for Java](http://dev.mysql.com/downloads/connector/j/),
+which must be downloaded separately because of its licensing agreement.
+Download the Connector and place it in a location where ArchivesSpace can
+find it on its classpath:
+
+```shell
+$ cd lib
+$ curl -Oq https://repo1.maven.org/maven2/com/mysql/mysql-connector-j/9.1.0/mysql-connector-j-9.1.0.jar
+```
+
+Note that the version of the MySQL connector may be different by the
+time you read this.
+
+## Set up your MySQL database
+
+Next, create an empty database in MySQL and grant access to a dedicated
+ArchivesSpace user. The following example uses username `as`
+and password `as123`. 
+
+**NOTE: WHEN CREATING THE DATABASE, YOU MUST SET THE DEFAULT CHARACTER
+ENCODING FOR THE DATABASE TO BE `utf8mb4`.** This is particularly important
+if you use a MySQL client to create the database (e.g. Navicat, MySQL
+Workbench, phpMyAdmin, etc.).
+
+**NOTE: If using AWS RDS MySQL databases, binary logging is not enabled by default and updates will fail.** To enable binary logging, you must create a custom db parameter group for the database and set `log_bin_trust_function_creators = 1`. See [Working with DB Parameter Groups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html) for information about RDS parameter groups. Within a MySQL session you can also try `SET GLOBAL log_bin_trust_function_creators = 1;`
+
+```shell
+$ mysql -uroot -p
+
+mysql> create database archivesspace default character set utf8mb4;
+Query OK, 1 row affected (0.08 sec)
+```
+
+If using MySQL 5.7 and below:
+
+```sql
+mysql> grant all on archivesspace.* to 'as'@'localhost' identified by 'as123';
+Query OK, 0 rows affected (0.21 sec)
+```
+
+If using MySQL 8+:
+
+```sql
+mysql> create user 'as'@'localhost' identified by 'as123';
+Query OK, 0 rows affected (0.08 sec)
+
+mysql> grant all privileges on archivesspace.* to 'as'@'localhost';
+Query OK, 0 rows affected (0.21 sec)
+```
+
+Then, modify your `config/config.rb` file to refer to your MySQL
+database. When you modify your configuration file, **MAKE SURE THAT YOU
+SPECIFY THAT THE CHARACTER ENCODING FOR THE DATABASE IS `UTF-8`** as shown
+below:
+
+```ruby
+AppConfig[:db_url] = "jdbc:mysql://localhost:3306/archivesspace?user=as&password=as123&useUnicode=true&characterEncoding=UTF-8"
+```
+
+There is a database setup script that will create all the tables that
+ArchivesSpace requires. 
Run this with:
+
+```shell
+scripts/setup-database.sh # or setup-database.bat under Windows
+```
+
+You can now follow the instructions in the "Getting Started" section to start
+your ArchivesSpace application.
+
+**NOTE for MySQL 8:** MySQL 8 uses a new method (caching_sha2_password) as the default authentication plugin instead of the old mysql_native_password that MySQL 5.7 and older used. This may require starting a MySQL 8 server with the `--default-authentication-plugin=mysql_native_password` option. You may also be able to change the auth mechanism on a per-user basis by logging into MySQL and running `ALTER USER 'as'@'localhost' IDENTIFIED WITH mysql_native_password BY 'as123';`. Also be sure to have the latest [MySQL Connector for Java](http://dev.mysql.com/downloads/connector/j/) from MySQL in your /lib/ directory for ArchivesSpace.
diff --git a/src/content/docs/ja/provisioning/newrelic.md b/src/content/docs/ja/provisioning/newrelic.md
new file mode 100644
index 0000000..49ff283
--- /dev/null
+++ b/src/content/docs/ja/provisioning/newrelic.md
@@ -0,0 +1,40 @@
+---
+title: Application monitoring with New Relic
+description: Instructions for how to set up New Relic for application monitoring on ArchivesSpace.
+---
+
+[New Relic](http://newrelic.com/) is an application performance monitoring tool (amongst other things).
+
+**To use with ArchivesSpace you must:**
+
+- Sign up for an account at New Relic (there are free and paid plans)
+- Edit config.rb to:
+  - activate the `newrelic` plugin
+  - add the New Relic license key
+  - add an application name to identify the ArchivesSpace instance in the New Relic dashboard
+
+For example, in config.rb:
+
+```ruby
+## You may have other plugins
+AppConfig[:plugins] = ['local', 'newrelic']
+
+AppConfig[:newrelic_key] = "enteryourkeyhere"
+AppConfig[:newrelic_app_name] = "ArchivesSpace"
+```
+
+- Install the New Relic agent library by initializing the plugin:
+
+```shell
+## For Linux/OSX
+$ scripts/initialize-plugin.sh newrelic
+
+## For Windows
+% scripts\initialize-plugin.bat newrelic
+```
+
+- Start, or restart, ArchivesSpace to pick up the configuration.
+
+Within a few minutes the application should be visible in the New Relic dashboard with data being collected.
+
+---
diff --git a/src/content/docs/ja/provisioning/prefix.md b/src/content/docs/ja/provisioning/prefix.md
new file mode 100644
index 0000000..d0ddc38
--- /dev/null
+++ b/src/content/docs/ja/provisioning/prefix.md
@@ -0,0 +1,64 @@
+---
+title: Proxy prefix
+description: Instructions for serving each user-facing ArchivesSpace application under a prefix rather than as its own subdomain.
+---
+
+**Important Note: Prefixes do NOT work properly in versions between 2.0.1 and 2.2.2**
+
+This document describes a simple approach for those wishing to deviate from the recommended
+practice of running each user-facing ArchivesSpace application on its own subdomain, and instead
+serve each application under a prefix, e.g.
+
+```
+http://aspace.myarchive.org/staff
+http://aspace.myarchive.org/public
+```
+
+The configuration described in this document is one possible approach,
+and to keep things simple the following are assumed:
+
+- ArchivesSpace is running on a single Linux server
+- The server is running the Apache 2.2+ webserver
+
+Unless otherwise stated, it is assumed that you have root access on
+your machines, and all commands are to be run as root (or with sudo).
+
+## Step 1: Set up proxies in your Apache configuration
+
+The following edits can be made in the httpd.conf file itself, or in an included file:
+
+```apache
+ProxyPass /staff http://localhost:8080/staff
+ProxyPassReverse /staff http://localhost:8080/
+ProxyPass /public http://localhost:8081/public
+ProxyPassReverse /public http://localhost:8081/
+```
+
+Now restart Apache.
+
+## Step 2: Install and configure ArchivesSpace
+
+Follow the instructions in the main README to download and install ArchivesSpace.
+
+Open the file `archivesspace/config/config.rb` and add the following lines:
+
+```ruby
+AppConfig[:frontend_proxy_url] = 'http://aspace.myarchive.org/staff'
+AppConfig[:public_proxy_url] = 'http://aspace.myarchive.org/public'
+```
+
+(Note: These lines should NOT begin with a '#' character.)
+
+Start ArchivesSpace.
+
+## Step 3: (Optional) Lock down ports 8080 and 8081
+
+By default, the staff and public applications are accessible on ports 8080 and 8081:
+
+```
+http://aspace.myarchive.org:8080
+http://aspace.myarchive.org:8081
+```
+
+Since these are not the URLs at which users should access the application, you will probably
+want to close them off. See README_HTTPS for more information on closing ports using iptables.
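As a sketch of the iptables approach (the exact rules are an assumption, not part of README_HTTPS): because Apache proxies to localhost, traffic on the loopback interface must remain allowed while outside connections to the app ports are dropped.

```shell
# Drop outside connections to the app ports; loopback traffic
# (which is what the Apache proxy uses) is unaffected.
iptables -A INPUT -p tcp --dport 8080 ! -i lo -j DROP
iptables -A INPUT -p tcp --dport 8081 ! -i lo -j DROP
```

Note that plain iptables rules do not survive a reboot; use your distribution's tooling (e.g. iptables-persistent or firewalld) to persist them.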
diff --git a/src/content/docs/ja/provisioning/robots.md b/src/content/docs/ja/provisioning/robots.md
new file mode 100644
index 0000000..702522a
--- /dev/null
+++ b/src/content/docs/ja/provisioning/robots.md
@@ -0,0 +1,45 @@
+---
+title: robots.txt
+description: Instructions for adding a robots.txt to your ArchivesSpace site.
+---
+
+The easiest way to add a `robots.txt` to your site is simply to create
+one in your `/config/` directory. This file will be served as a standard
+`robots.txt` file when you start your site.
+
+If you're not able to do that, you can use a separate file and your proxy.
+
+For Apache the config would look like this:
+
+```apache
+<Location "/robots.txt">
+  SetHandler None
+  Require all granted
+</Location>
+Alias /robots.txt /var/www/robots.txt
+```
+
+For nginx, it would look more like this:
+
+```nginx
+location /robots.txt {
+  alias /var/www/robots.txt;
+}
+```
+
+You may also add robots meta-tags to your `layout_head.html.erb` to be included in the header area of your site.
+
+For example:
+
+`<meta name="robots" content="noindex, nofollow">`
+
+A sensible starting point for a `robots.txt` file looks something like this (note the rules need a `User-agent` line to apply):
+
+```
+User-agent: *
+Disallow: /search*
+Disallow: /inventory/*
+Disallow: /collection_organization/*
+Disallow: /repositories/*/top_containers/*
+Disallow: /check_session*
+Disallow: /repositories/*/resources/*/tree/*
+```
diff --git a/src/content/docs/ja/provisioning/solr.md b/src/content/docs/ja/provisioning/solr.md
new file mode 100644
index 0000000..84845d0
--- /dev/null
+++ b/src/content/docs/ja/provisioning/solr.md
@@ -0,0 +1,205 @@
+---
+title: External Solr
+description: Instructions for installing and using external Solr with ArchivesSpace.
+---
+
+:::note
+For ArchivesSpace > 3.1.1, external Solr is **required**. For previous versions it is optional.
+:::
+
+## Supported Solr Versions
+
+See the [Solr requirement notes](/administration/getting_started#solr)
+
+## Install Solr
+
+Refer to the [Solr documentation](https://solr.apache.org/guide/solr/latest/) for instructions on setting up Solr on your server.
+
+Download the Solr package and extract it to a folder of your choosing. Do not start Solr
+until you have added the ArchivesSpace configuration files.
+
+**We strongly recommend a standalone mode installation. No support will be provided for Solr
+Cloud deployments specifically (i.e. we cannot help troubleshoot Zookeeper).**
+
+## Create a configset
+
+Before running Solr you will need to
+set up a [configset](https://solr.apache.org/guide/8_10/config-sets.html#configsets-in-standalone-mode).
+
+### Create a new directory
+
+#### Linux
+
+Using the command line:
+
+```shell
+mkdir -p /$path/$to/$solr/server/solr/configsets/archivesspace/conf
+```
+
+Be sure to replace `/$path/$to/$solr` with your actual Solr location, which might be something like:
+
+```shell
+mkdir -p /opt/solr/server/solr/configsets/archivesspace/conf
+```
+
+#### Windows
+
+Right-click on your Solr directory and open in Windows Terminal (Powershell).
+
+```
+mkdir -p .\server\solr\configsets\archivesspace\conf
+```
+
+You should see something like this in response:
+
+```
+Directory: C:\Users\archivesspace\Projects\solr-8.10.1\server\solr\configsets\archivesspace
+Mode    LastWriteTime        Length Name
+----    -------------        ------ ----
+d-----  10/25/2021 12:15 PM         conf
+```
+
+### Copy the config files
+
+Copy the ArchivesSpace Solr configuration files from the `solr` directory included
+in the zip file release into the `$SOLR_HOME/server/solr/configsets/archivesspace/conf` directory.
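On Linux, the copy step might look like the following sketch (both paths are assumptions: the release zip extracted to `~/archivesspace` and Solr installed at `/opt/solr`):

```shell
# Copy the four ArchivesSpace-provided Solr config files into the configset
cp ~/archivesspace/solr/{schema.xml,solrconfig.xml,stopwords.txt,synonyms.txt} \
   /opt/solr/server/solr/configsets/archivesspace/conf/
```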
+
+There should be four files:
+
+- schema.xml
+- solrconfig.xml
+- stopwords.txt
+- synonyms.txt
+
+```shell
+ls .\server\solr\configsets\archivesspace\conf\
+
+Directory: C:\Users\archivesspace\Projects\solr-8.10.1\server\solr\configsets\archivesspace\conf
+
+Mode    LastWriteTime        Length Name
+----    -------------        ------ ----
+-a----  10/25/2021 12:18 PM   18291 schema.xml
+-a----  10/25/2021 12:18 PM    3046 solrconfig.xml
+-a----  10/25/2021 12:18 PM       0 stopwords.txt
+-a----  10/25/2021 12:18 PM       0 synonyms.txt
+```
+
+_Note: your exact output may be slightly different._
+
+## Set up the environment
+
+When using Solr v9 or later, the use of [Solr modules](https://solr.apache.org/guide/solr/latest/configuration-guide/solr-modules.html) is required.
+We recommend using the environment variable option to specify the modules to use:
+
+```shell
+SOLR_MODULES=analysis-extras
+```
+
+This environment variable needs to be available to the Solr instance at runtime.
+
+For instructions on how to set an environment variable, here are some recommended articles:
+
+- When using [linux](https://www.freecodecamp.org/news/how-to-set-an-environment-variable-in-linux)
+- When using a [mac](https://phoenixnap.com/kb/set-environment-variable-mac)
+- When using [windows](https://docs.oracle.com/cd/E83411_01/OREAD/creating-and-modifying-environment-variables-on-windows.htm#OREAD158). Note that on Windows, the variable name should be: `SOLR_MODULES` and the variable value: `analysis-extras`
+
+## Set up a Solr core
+
+With the `configset` in place, run the command to start Solr:
+
+```bash
+bin/solr start
+```
+
+Wait for Solr to start (running as a non-admin user):
+
+```shell
+.\bin\solr start
+"java version info is 11.0.12"
+"Extracted major version is 11"
+OpenJDK 64-Bit Server VM warning: JVM cannot use large page memory because it does not have enough privilege to lock pages in memory.
+Waiting up to 30 to see Solr running on port 8983
+Started Solr server on port 8983.
Happy searching!
+```
+
+You can check that Solr is running on [http://localhost:8983](http://localhost:8983).
+
+Now create the core:
+
+```shell
+bin/solr create -c archivesspace -d archivesspace
+```
+
+You should see confirmation:
+
+```shell
+"java version info is 11.0.12"
+"Extracted major version is 11"
+
+Created new core 'archivesspace'
+```
+
+In the browser you should be able to access the [ArchivesSpace schema](http://localhost:8983/solr/#/archivesspace/files?file=schema.xml).
+
+## Disable the embedded server Solr instance (optional, <= 3.1.1 only)
+
+Edit the ArchivesSpace config.rb file:
+
+```ruby
+AppConfig[:enable_solr] = false
+```
+
+Note that doing this means that you will have to back up Solr manually.
+
+## Set the Solr URL in your config.rb file
+
+This config setting should point to your Solr instance:
+
+```ruby
+AppConfig[:solr_url] = "http://localhost:8983/solr/archivesspace"
+```
+
+If you are not running ArchivesSpace and Solr on the same server, update
+`localhost` to your Solr address.
+
+By default, on startup, ArchivesSpace will check that the Solr configuration
+appears to be correct and will raise an error if not. You can disable this check
+by setting `AppConfig[:solr_verify_checksums] = false` in `config.rb`.
+
+Please note: if you're upgrading an existing installation of ArchivesSpace to use an external Solr, you will need to trigger a full re-index.
+See [Indexes](/administration/indexes) for more details.
+
+---
+
+You can now follow the instructions in the [Getting started](/administration/getting_started) section to start
+your ArchivesSpace application.
+
+---
+
+## Upgrading Solr
+
+If you are using an older version of Solr than is recommended, you may need to
+upgrade (if called out in release notes) or simply want to.
Before performing an upgrade it is recommended that you review:
+
+- [Solr upgrade notes](https://solr.apache.org/guide/solr/latest/upgrade-notes/solr-upgrade-notes.html)
+- [ArchivesSpace's release notes](https://github.com/archivesspace/archivesspace/releases)
+
+You should also review this document, as the installation steps may include
+instructions that were not present in the past. For example, from Solr v9 there is a
+requirement to use Solr modules, with instructions to configure the modules using environment
+variables.
+
+The crucial part will be ensuring that ArchivesSpace's schema is being used for the
+ArchivesSpace Solr index. The config setting `AppConfig[:solr_verify_checksums] = true`
+will perform a check on startup that confirms this is the case; otherwise ArchivesSpace
+will not be able to start up.
+
+From ArchivesSpace 3.5+ `AppConfig[:solr_verify_checksums]` does not check the
+`solrconfig.xml` file. Therefore you can make changes to it without ArchivesSpace failing
+on startup. However, for an upgrade you will want to at least compare the ArchivesSpace
+`solrconfig.xml` to the one that is in use, in case there are changes that need to be made to
+work with the upgraded-to version of Solr. For example, the ArchivesSpace Solr v8 `solrconfig.xml`
+will not work as-is with Solr v9.
+
+After upgrading Solr you should trigger a full re-index. Instructions for this are in
+[Indexes](/administration/indexes).
diff --git a/src/content/docs/ja/provisioning/tuning.md b/src/content/docs/ja/provisioning/tuning.md
new file mode 100644
index 0000000..b36f9f2
--- /dev/null
+++ b/src/content/docs/ja/provisioning/tuning.md
@@ -0,0 +1,51 @@
+---
+title: Performance tuning
+description: Guidance for performance tuning of the ArchivesSpace stack.
+---
+
+ArchivesSpace is a stack of web applications which may require special tuning in order to run most effectively.
This is especially the case for institutions with lots of data or many simultaneous users editing metadata.
+Keep in mind that ArchivesSpace can be hosted on multiple servers, either in a [multitenant setup](/provisioning/clustering) or by deploying the various applications (i.e. backend, frontend, public, Solr, and indexer) on separate servers.
+
+## Application Settings
+
+The application itself can be tuned in numerous ways. It’s a good idea to read the [configuration documentation](/customization/configuration), as there are numerous settings that can be adjusted to fit your needs.
+
+An important thing to note is that since ArchivesSpace is a Java application, it’s possible to set the memory allocations used by the JVM. There are numerous articles on the internet full of information about what the optimal settings are, which will depend greatly on the load your server is experiencing and the hardware. It’s a good idea to monitor the application and ensure that it’s not hitting the upper limit of the heap you’ve set.
+
+These settings are:
+
+- ASPACE_JAVA_XMX : Maximum heap space (maps to Java’s Xmx, default "-Xmx1024m")
+- ASPACE_JAVA_XSS : Thread stack size (maps to Xss, default "-Xss2m")
+- ASPACE_GC_OPTS : Options used by the Java garbage collector (default: "-XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:NewRatio=1")
+
+To modify these settings, Linux users can either export an environment variable (e.g. `$ export ASPACE_JAVA_XMX="-Xmx2048m"`) or edit the archivesspace.sh startup script and modify the defaults.
+
+Windows users must edit the archivesspace.bat file.
+
+If you're having trouble with errors like `java.lang.OutOfMemoryError` try doubling the `ASPACE_JAVA_XMX`.
On Linux you can do this either by setting an environment variable like `$ export ASPACE_JAVA_XMX="-Xmx2048m"` or by editing archivesspace.sh:
+
+```shell
+if [ "$ASPACE_JAVA_XMX" = "" ]; then
+    ASPACE_JAVA_XMX="-Xmx2048m"
+fi
+```
+
+For Windows, you'll change archivesspace.bat:
+
+```shell
+java -Darchivesspace-daemon=yes %JAVA_OPTS% -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:NewRatio=1 -Xss2m -Xmx2048m -Dfile.encoding=UTF-8 -cp "%GEM_HOME%\gems\jruby-rack-1.1.12\lib\*;lib\*;launcher\lib\*!JRUBY!" org.jruby.Main "launcher/launcher.rb" > "logs/archivesspace.out" 2>&1
+```
+
+**NOTE: THE APPLICATION WILL NOT USE THE AVAILABLE MEMORY UNLESS YOU SET THE MAXIMUM HEAP SIZE TO ALLOCATE IT.** For example, if your server has 4 gigs of RAM, but you haven’t adjusted the ArchivesSpace settings, you’ll only be using 1 gig.
+
+## MySQL
+
+The ArchivesSpace application can hit a database server rather hard, since it’s a metadata-rich application. There are many articles online about how to tune a MySQL database. A good place to start is to try something like [MySQL Tuner](http://mysqltuner.com/) or [Tuning Primer](https://rtcamp.com/tutorials/mysql/tuning-primer/), which can give good hints on possible tweaks to make to your MySQL server configuration.
+
+Keep a close eye on the memory available to the server, as well as your InnoDB buffer pool.
+
+## Solr
+
+The internet is full of many suggestions on how to optimize a Solr index. [Running an external Solr index](/provisioning/solr) can be beneficial to the performance of ArchivesSpace, since that moves the index to its own server.
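Putting the JVM settings discussed above into practice, a minimal Linux sketch (the 2 GB heap is an illustrative value for a server with roughly 4 GB of RAM, not a recommendation):

```shell
# Illustrative JVM sizing; adjust the values to your hardware and workload.
export ASPACE_JAVA_XMX="-Xmx2048m"   # maximum heap size
export ASPACE_JAVA_XSS="-Xss2m"      # per-thread stack size

# Then start (or restart) ArchivesSpace so the new values take effect:
#   ./archivesspace.sh
```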
diff --git a/src/content/docs/ja/release-notes/v4.0.0.md b/src/content/docs/ja/release-notes/v4.0.0.md
new file mode 100644
index 0000000..3324b7b
--- /dev/null
+++ b/src/content/docs/ja/release-notes/v4.0.0.md
@@ -0,0 +1,89 @@
+---
+title: v4.0.0
+---
+
+## ArchivesSpace v4.0.0 Release Summary
+
+Major technical infrastructure upgrades and user interface improvements characterize this release. Key changes include:
+
+## Breaking Changes
+
+- **Breaking change**: [OAI identifiers now use colon separator between the namespace and identifier](#api-and-integration-updates)
+- **Breaking change**: [Solr 9 now required](#major-infrastructure-updates)
+- **Breaking change**: [the Sequence module has been removed from core ArchivesSpace](#plugins-and-configuration)
+
+## Major Infrastructure Updates
+
+- **Breaking change**: Solr 9 now required
+- Upgraded to newer versions of:
+  - Bootstrap (4.3)
+  - jQuery (3.7.0)
+  - Rails (6.1.6)
+  - JRuby (9.3.x.x)
+  - Nokogiri (1.13.10)
+  - Sequel (5.9.0)
+- Frontend and public development web server migrated from Jetty to Puma (6.4.2)
+- Staff application CSS migrated from Less to Sass
+- Java 8 no longer supported; Java 11 or 17 now required
+- Docker now supported as recommended deployment method
+
+## Public User Interface Improvements
+
+- Collection organization sidebar can now be configured for left/right positioning in config.rb
+- New information and options for large finding aids
+  - Displays percentage of loaded records in infinite scroll
+  - Option to load all children for a resource at once (vs infinite scroll)
+- Search terms now highlighted in results
+- Fixed bug causing extra lines in notes display
+- Changed PDF label from "Print" to "Download PDF"
+- PDF uses Kurinto fonts by default
+- Improved hyperlink display in classification descriptions
+
+## Staff Interface Enhancements
+
+- Bulk updater plugin now part of core application
+- New ability to duplicate full resource or archival object records
+- Enhanced
spreadsheet importers
+  - Added new fields for digital objects to bulk Digital Object spreadsheet
+  - Location imports can include an owner repository
+  - Archival Object CSV imports now respect publication status
+  - New option to download partially completed digital object spreadsheet template
+- Fixed agent merge preview page
+- Improved staff plugins dropdown in repository settings
+- Fixes to the Rapid Data Entry modal
+- Fixed tooltip bugs
+- Improved Jobs status layouts
+
+## EAD Export Changes
+
+- More fields have special characters escaped
+- Removed commas and periods from langmaterial notes
+- Leading XML tags in Revision Description will no longer cause invalid XML
+
+## Documentation and Testing
+
+- Launched new technical documentation site at docs.archivesspace.org
+- Ported all Selenium tests to Capybara
+- Added functionality for test failure screenshots
+
+## API and Integration Updates
+
+- **Breaking change**: OAI identifiers now use colon separator between the namespace and identifier
+
+## Security and Administration
+
+- New config.rb option to allow users with the Administrator role to access the system information page
+- Added config.rb option for favicon display
+- PUI PDFs will now include clearer error messages when generation fails
+- Enhanced bulk import/update capabilities with new configuration options
+
+## Plugins and Configuration
+
+- **Breaking change**: the Sequence module has been removed from core ArchivesSpace
+
+## Community Contributions
+
+- 76 community contributions accepted
+- 134 Pull Requests merged
+- 146 Jira Tickets closed
+- Contributions from multiple community members and organizations