Kotlin Project Model Design Documentation 1.0-master Help

Open design question


Previous design


  • We need a grouping abstraction for cases like “main/test/integrationTest/benchmarks”

  • We do not know whether we need this abstraction for other cases

  • We do not know what to call this abstraction, nor whether it is a part of the fundamental core model (as in “module bundles” + modules) or just a small convenience detail (some additional entities inside modules)

One proposal is: make each module serve a single purpose: production / test / benchmark / integration-test. Express the dependencies between such modules as module dependencies, which naturally decompose into fragment dependencies. Keep refinement available only between fragments inside one module – this seems to solve the main–test refinement case. If necessary, refinement between modules can work through #Open Expects

The other proposal is: add something like “variant bundles” with identifiers inside modules; make the “main” variant bundle the default one in dependency resolution.
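To make the second proposal concrete, here is a minimal runnable sketch of how “variant bundles” with a default “main” bundle could look in dependency resolution. All names here (VariantBundle, resolveBundle, ...) are illustrative assumptions, not a proposed API:

```kotlin
// Illustrative sketch of the "variant bundles" proposal; names and shapes
// are assumptions, not the real KPM model.
data class VariantBundle(val name: String, val variants: List<String>)

data class Module(val name: String, val bundles: List<VariantBundle>)

// Dependency resolution picks the "main" bundle unless another is requested.
fun resolveBundle(module: Module, requested: String? = null): VariantBundle =
    module.bundles.first { it.name == (requested ?: "main") }

fun main() {
    val lib = Module(
        name = "lib",
        bundles = listOf(
            VariantBundle("main", listOf("jvm", "js")),
            VariantBundle("test", listOf("jvm", "js")),
            VariantBundle("benchmarks", listOf("jvm")),
        )
    )
    println(resolveBundle(lib).name)               // main
    println(resolveBundle(lib, "benchmarks").name) // benchmarks
}
```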

Internal visibility (and associated compilations?)


Make Kotlin Attributes an implementation detail?

Current usages of Kotlin attributes are limited to the following areas:

  • Regulating “compatible” and “non-compatible” dependencies:

    • JVM can not refine JS

    • JVM can not have a fragment dependency on JS

  • Helping to infer visibility

    • module dependency from P.jvmAndJs to module M is expanded into P.jvmAndJs → M.jvmAndJs because they have compatible attributes

    • alternative wording: P.jvmAndJs depends on M and sees M.jvm variant and M.js variant, because they are attributes-compatible with P.jvmAndJs

    • Either way, the reasoning goes like “I write this dependency on a module, and I expect to see this and that”, so we must say something about attributes or a similar mechanic here

  • Affecting analyzer settings (e.g. JVM checkers)
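As a thought experiment, the visibility-inference case can be sketched in plain Kotlin. The rule used here (a dependency fragment is visible when its platform set covers every platform of the consuming fragment) is an assumption for illustration, not the real resolution algorithm:

```kotlin
// Hypothetical sketch of attribute-driven visibility inference.
// A fragment carries the set of platforms it must compile for.
data class Fragment(val name: String, val platforms: Set<String>)

// Assumed rule: a dependency fragment is visible from the consumer when it
// covers all of the consumer's platforms (so jvm code never sees js-only code).
fun visibleFragments(consumer: Fragment, dependency: List<Fragment>): List<Fragment> =
    dependency.filter { it.platforms.containsAll(consumer.platforms) }

fun main() {
    val jvmAndJs = Fragment("P.jvmAndJs", setOf("jvm", "js"))
    val m = listOf(
        Fragment("M.common", setOf("jvm", "js", "macos")),
        Fragment("M.jvmAndJs", setOf("jvm", "js")),
        Fragment("M.jvm", setOf("jvm")),
    )
    // P.jvmAndJs -> M expands to M.common and M.jvmAndJs, but not M.jvm.
    println(visibleFragments(jvmAndJs, m).map { it.name })
}
```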

Can we detach the platform-dimension from attributes?

It is possible, and the cases above can be reformulated without contradictions.

However, there remain cases which have the very same nature, but would then somehow need different wording:

  • jvmTest can not refine jvmMain: with attributes we can say that they just have incompatible attribute values
    • given that jvmTest actually depends on the main code and should not normally provide actuals for expects from the main code, we may exclude this case from the scope of attributes and state that jvmTest cannot refine jvmMain because of something else (e.g. the fragments being in different modules main and test?)

  • a dependency from macosDebug on some module should lead to seeing that module’s matching macosDebug variant

Refinement outside of “platform” dimension

There’s a desirable cluster of cases where one might want to use the expect/actual mechanism for purposes other than cross-platform code sharing.

  • Sharing code between main and test: essentially, compiler-assisted mocking/service-loading (declare the service as expect, the production implementation as actual in main, and a mock implementation as actual in test)

  • Sharing code between debug and release: for example, logs/assertions might be more verbose under debug and no-op or lightweight under release, via different actual implementations

  • Sharing code across various product flavours, like paid/demo
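The main/test mocking case can be imitated in plain Kotlin so the sketch runs as a single file: in real MPP code, Clock would be an expect declaration with one actual in main and a mock actual in test; here an ordinary interface stands in, and all names are illustrative:

```kotlin
// Plain-Kotlin imitation of the main/test mocking case. In real MPP code,
// `Clock` would be `expect`, with the two implementations below as `actual`
// declarations in the main and test fragments respectively.
interface Clock { fun now(): Long }

// "main" implementation: the real system clock.
object SystemClock : Clock { override fun now() = System.currentTimeMillis() }

// "test" implementation: a deterministic mock.
class FixedClock(private val instant: Long) : Clock { override fun now() = instant }

fun describe(clock: Clock): String = "now=${clock.now()}"

fun main() {
    println(describe(FixedClock(42))) // now=42
}
```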

The lingering caveat here is that main and test might seem very similar to the other mentioned cases, but in fact they are different. Let us show why.

No matter what, the following must hold: code with two actuals for one and the same expect must never be executed. That is pretty reasonable, as the runtime won’t be able to decide which actual implementation should be called.

Now, observe that most of the cases above naturally don’t attempt to do so: you either execute debug code or release code, paid code or demo code, but not both at the same time. That isn’t true for main and test, however. Tests naturally call a lot of production code, and so they are executed in the same runtime environment.

This is what makes main/test refinement different from the other sorts. It means that some dimensions are “refinement-friendly” while others are “refinement-hostile”.

We currently don’t have a mental model which can express this thought.
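As a very first approximation (an assumption, not an established model), one could classify a dimension as refinement-friendly exactly when its values never coexist in one runtime:

```kotlin
// Tentative criterion: refinement is safe along a dimension only if at most
// one of its values can be present in a single runtime. Names are illustrative.
data class Dimension(val name: String, val valuesCoexistAtRuntime: Boolean)

fun isRefinementFriendly(d: Dimension): Boolean = !d.valuesCoexistAtRuntime

fun main() {
    val buildType = Dimension("buildType (debug/release)", valuesCoexistAtRuntime = false)
    val purpose = Dimension("purpose (main/test)", valuesCoexistAtRuntime = true)
    println(isRefinementFriendly(buildType)) // true: only one of debug/release runs
    println(isRefinementFriendly(purpose))   // false: tests run together with main code
}
```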

Dependency scopes in KPM


Single-platform projects as special case in KPM

It is clearly desirable to show that single-platform projects can be expressed in terms of KPM, thus demonstrating that they are just a specific case of a multiplatform project (though expressing them with the complete power of KPM may be overkill).

This is a TODO section for that explanation.

Architecture and protocol of communicating with the compiler facade – “compilation requests”

The communication between the build system and the compiler facade may employ a series of requests to “compile”, “link”, “build” something etc. These requests might not be uniform. For example, requests to link a binary might need different handling (inputs, outputs, arguments) than requests to compile a KLIB.

Inferring all of the available “build actions” from the project model may require complicated logic and probably should be done by the compiler facade based on the model description. Then the compiler facade may “tell” the build tool about the possible build actions and accept just the build action ID (plus, maybe, something about where to put the outputs).

Possible kind of build actions:

  • compile fragment

  • link variant binary

  • build the “composite” multiplatform library artifact

  • ...

If we want more granular build action requests (e.g. the build system takes care of some up-to-date checks and only wants us to recompile what it thinks is out-of-date), then the build system will probably have to pass the compiler facade the output locations of previous/relevant/all build actions.

It may look like this:

"for buildActionId=compileFragmentFoo the (single) output location is build/classes/foo; for buildActionId=linkLinuxSharedLibrary the output locations are sharedLib =build/binaries/linux, headers=build/headers/linux, ..."

Compiler facade optimisation stages

API. Adding compilation granularity.

It is detrimental to UX to always perform a full build of an MPP library: e.g. if one wants to launch JVM tests, there’s no need to compile all K/N targets.

Let’s fix that by specifying the fragment which should be compiled:

class CompilationRequest {
    val compiledModule: KotlinModule
    val requestedFragment: KotlinFragment
    val outputPath: File
}

Note that this means that to compile a complete Kotlin MPP module (e.g. to prepare it for publication), the build system has to repeatedly send multiple CompilationRequests, iterating over all source sets, and then build the expected layout of the published Kotlin module itself, accounting for all specifics (e.g. how we pack together symbolic metadata for the various fragments).

This is certainly an issue, as the goal of the Compiler Facade is to encapsulate exactly such specifics.

To address this, introduce a separate API endpoint, let’s call it AssembleRequest, which takes as input the paths to which the previous CompilationRequests compiled, and then assembles those outputs into one output (potentially consisting of multiple files) which represents the to-be-published layout

class AssembleRequest {
    val fragmentOutputs: Collection<File>
    val outputPath: File
}
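A hypothetical driver loop showing how a build system might use these two endpoints; the request classes are restated here (with constructor parameters) so the sketch is self-contained, and the stub compile/assemble functions stand in for the real facade:

```kotlin
import java.io.File

// Self-contained restatement of the sketched model; the real KPM types differ.
class KotlinModule(val name: String, val fragments: List<KotlinFragment>)
class KotlinFragment(val name: String)

class CompilationRequest(
    val compiledModule: KotlinModule,
    val requestedFragment: KotlinFragment,
    val outputPath: File,
)

class AssembleRequest(val fragmentOutputs: Collection<File>, val outputPath: File)

// Stub facade: pretend each request simply produces its output path.
fun compile(request: CompilationRequest): File = request.outputPath
fun assemble(request: AssembleRequest): File = request.outputPath

fun main() {
    val module = KotlinModule("lib", listOf(KotlinFragment("common"), KotlinFragment("jvm")))
    // The build system iterates over all fragments...
    val outputs = module.fragments.map { fragment ->
        compile(CompilationRequest(module, fragment, File("build/classes/${fragment.name}")))
    }
    // ...then asks the facade to assemble the to-be-published layout.
    val artifact = assemble(AssembleRequest(outputs, File("build/publication")))
    println(artifact.path)
}
```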

API. Adding dependencies downloading granularity

TODO this and other optimisation stages

Last modified: 20 May 2021