hos.lyric🐇's Blog

Solution Sketches for Elegia's Forgotten Hill

2025-07-28 11:36:46 By hos_lyric

https://qoj.ac/contest/1814 $\newcommand\d{\mathrm{d}}$ $\newcommand\e{\mathrm{e}}$

  • Work in progress!
  • A decent amount of this blog is not my original ideas but a reiteration of Elegia's (or others') ideas in my own words.
  • Everywhere below I omit mention of $o(n \log(n)^2)$ time online convolution, such as the practical $O(\frac{n \log(n)^2}{\log(\log(n))})$ ($\log(n)$-ary tree) or the theoretical $O(n \log(n) \e^{2\sqrt{\log(2)\log(\log(n))}})$.

10932 彩色弹珠

Sweep by $l$, maintaining a segment tree for $r$. $O(n \log(n))$ time.

298 狗

For each value, maintain maximal segments by std::set. Regarding such segment as a point $(l, r)$, it becomes a 2D query problem (point addition, rectangle sum). BIT on BIT, $O((n+q) \log(n+q)^2)$ time.

299 梨

The obvious minimum cost flow formulation has $O(n)$ vertices. Use a segment tree shape to reduce the number of edges to $O(n + m \log(n))$.

300 锅

Do FFT every $B$ queries. The contribution of a modification to a position can be taken in $O(1)$ time. $O(\frac{q}{B} n \log(n) + qB)$ time, which is $O(q \sqrt{n \log(n)})$ for $B = \Theta(\sqrt{n \log(n)})$.

841 树状数组

Move the smaller as $n \mapsto n + \mathrm{lowbit}(n)$ until they meet. $O(\log(\max\{x,y\}))$ time.
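
This walk is tiny to implement; a minimal sketch (the function name `meet` is mine):

```python
def lowbit(n):
    # Lowest set bit of n.
    return n & -n

def meet(x, y):
    # Repeatedly advance the smaller index along the Fenwick-tree update
    # chain n -> n + lowbit(n); the two chains merge (at latest at a
    # common power of two) after O(log(max(x, y))) steps.
    while x != y:
        if x < y:
            x += lowbit(x)
        else:
            y += lowbit(y)
    return x
```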

842 精灵之环

One $c$-cycle in a permutation $f$ becomes $\gcd(c,k)$ $\frac{c}{\gcd(c,k)}$-cycles in $f^k$. For each $d$, the set $\{ c \mid \frac{c}{\gcd(c,k)} = d \}$ has the property that the minimum element $c_0$ divides every element. So the number of $d$-cycles in $f^k$ must be divisible by $\frac{c_0}{d}$, and we can combine them in groups of $\frac{c_0}{d}$ $d$-cycles.

843 道路规划

Anyway, for fixed $k$, the answers for $n$ satisfy a linear recurrence. Run your favorite DP and Berlekamp-Massey.

Let the Laplacian matrix of $m$-vertex path graph be $L_m$. Let the eigenvalues of $L_k$ and $L_n$ be $\alpha_0 = 0, \alpha_1, \ldots, \alpha_{k-1}$ and $\beta_0 = 0, \beta_1, \ldots, \beta_{n-1}$, respectively. The Laplacian matrix of $k \times n$ grid graph is the Kronecker sum $L_k \oplus L_n$, so the answer is $\frac{1}{kn} \prod_{(i,j)\ne(0,0)} (\alpha_i + \beta_j)$. It reduces to compute the resultant $\mathrm{res}(\det(xI-L_k) / x, \det((-x)I-L_n))$.

We just need $\det((-x)I-L_n) \bmod (\det(xI-L_k) / x)$, and $\det((-x)I-L_n)$ satisfies a linear recurrence. $O(\mathrm{poly}(k) \log(n))$ time.
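
As a sanity check on the Kronecker-sum claim: the eigenvalues of the path Laplacian $L_m$ are $2 - 2\cos(\frac{\pi j}{m})$ ($0 \le j < m$), and the product formula must agree with the Matrix-Tree determinant. A small numeric sketch (function names are mine; floats are fine at this size):

```python
import math
from fractions import Fraction

def grid_laplacian(k, n):
    # Laplacian of the k x n grid graph; vertex (a, b) is flattened to a*n + b.
    V = k * n
    L = [[0] * V for _ in range(V)]
    for a in range(k):
        for b in range(n):
            for da, db in ((1, 0), (0, 1)):
                if a + da < k and b + db < n:
                    u, v = a * n + b, (a + da) * n + (b + db)
                    L[u][u] += 1
                    L[v][v] += 1
                    L[u][v] -= 1
                    L[v][u] -= 1
    return L

def spanning_trees_det(k, n):
    # Matrix-Tree theorem: delete one row and column, take the determinant.
    L = grid_laplacian(k, n)
    M = [[Fraction(x) for x in row[1:]] for row in L[1:]]
    det = Fraction(1)
    for c in range(len(M)):
        p = next(r for r in range(c, len(M)) if M[r][c] != 0)
        if p != c:
            M[c], M[p] = M[p], M[c]
            det = -det
        det *= M[c][c]
        for r in range(c + 1, len(M)):
            f = M[r][c] / M[c][c]
            for cc in range(c, len(M)):
                M[r][cc] -= f * M[c][cc]
    return int(det)

def spanning_trees_eig(k, n):
    # Path Laplacian L_m has eigenvalues 2 - 2cos(pi*j/m); the grid Laplacian
    # is the Kronecker sum, so multiply alpha_i + beta_j over (i, j) != (0, 0).
    alpha = [2 - 2 * math.cos(math.pi * i / k) for i in range(k)]
    beta = [2 - 2 * math.cos(math.pi * j / n) for j in range(n)]
    prod = 1.0
    for i in range(k):
        for j in range(n):
            if i or j:
                prod *= alpha[i] + beta[j]
    return round(prod / (k * n))
```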

844 空间切割

I don't have a proof... Each set of $i$ hyperplanes in general position ($0 \le i \le k$) has a $(k-i)$-dimensional intersection, and it adds one region (compared to when there is no intersection). The answer is $\sum_{0\le i\le k} \binom{n}{i}$.
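
Not a proof either, but the formula is consistent with the standard recurrence $R(n,k) = R(n-1,k) + R(n-1,k-1)$: the $n$-th hyperplane is cut by the others into $R(n-1,k-1)$ pieces, each of which splits one region in two. A quick check:

```python
from functools import lru_cache
from math import comb

@lru_cache(None)
def regions(n, k):
    # Regions of n hyperplanes in general position in R^k,
    # by the recurrence R(n,k) = R(n-1,k) + R(n-1,k-1).
    if n == 0 or k == 0:
        return 1
    return regions(n - 1, k) + regions(n - 1, k - 1)

def regions_formula(n, k):
    # The closed form claimed above.
    return sum(comb(n, i) for i in range(k + 1))
```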

845 御坂网络

Usual rerooting DP works. It is also easy to compute the difference when we move the root to an adjacent vertex.

846 传送门

We can use segment tree shapes to reduce the number of edges to $O(n + m \log(n))$.

847 游戏

Contribution of $f$ is linear, namely the probability of visiting a particular value. Precompute those by DP.

848 组合数

For $n \ge 0$, $f(n+1,m) = 2 f(n,m) - \binom{n}{m}$.
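
This recurrence is what one gets for prefix sums of binomial coefficients; assuming $f(n,m) = \sum_{0\le i\le m} \binom{n}{i}$ (my guess at the problem's $f$, consistent with the recurrence), Pascal's rule gives it directly:

```python
from math import comb

def f(n, m):
    # Prefix sum of a row of Pascal's triangle (assumed meaning of f).
    return sum(comb(n, i) for i in range(m + 1))

# Pascal: C(n+1,i) = C(n,i) + C(n,i-1); summing over i <= m counts each
# C(n,i) twice, except that C(n,m) from the shifted sum falls off the end.
```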

849 瘟疫公司

First compute all $w$ in $O(m \log(m) + 2^n m \alpha(n))$ time.

Then DP. In step $k$, handle all transitions $S \subsetneq T$ with $|S| = k$. One zeta-transform in each step suffices. $O(2^n n^2)$ time.

903 彼岸蔷薇

See [353 人造情感]. Here the weight is $1$, so we want to count paths such that $f_*$ on the path is all $0$ when LCA is the root. So we have the same goal and only the last additional simple DP differs.

274 复读 / 复读机

Fix the vertex $u$ at the end of the first period. It is useless to end outside the given subtree, so there are $n$ candidates.

It is useless to visit below $u$ in the first period: we can cut the moves below $u$, shift them up to the root, and merge the trees. By repeating this, you get a tree with $u$ being a leaf. Then the cost is $2 \cdot (\text{the number of edges}) - \mathrm{depth}(u)$. To obtain this tree in $O(n)$ time, we can add leaves one by one, each of which grows the tree by at most one edge.

273 混乱度

Write $a_i = \sum_e a_i^{(e)} p^e$ in base $p$. We have $f(l, r) \equiv \prod_e [a_l^{(e)} + \cdots + a_r^{(e)} < p] \frac{(a_l^{(e)} + \cdots + a_r^{(e)})!}{a_l^{(e)}! \cdots a_r^{(e)}!} \pmod{p}$. Fix $l$, then there are at most $(p-1) \lfloor \log_p(\max a_i) + 1 \rfloor$ interesting intervals for $r$. We should have the list of nonzero $a_i^{(e)}$s to compute in $O(1)$ time per interval.

1691 极地科考

Intervals of length $\ge 2k$ are useless, as we can cut and choose the better. For each length in $[k, 2k)$, maintain a segment tree. $O(k (n + m \log(n)))$ time.

1692 梦中的项链

Apply inclusion-exclusion on the conditions of adjacent gems being different. It becomes a cycle of chains, or a whole cycle. The latter case is easy. For the former case, the generating function for a chain is $f = \sum_{k,l} a_k (-1)^{l-1} x^{kl}$. Then a cycle of chains is represented by $\frac{x f'}{1-f}$ (mark one gem from one chain as a start of the cycle). $O(n \log(n))$ time.

6204 押韵

The answer is $[\frac{x^{nd}}{(nd)!}] \left(\sum_{d\mid i} \frac{x^i}{i!}\right)^k$. Letting $\zeta$ be the primitive $d$-th root of unity (which exists in $\mathbb{F}_{1049874433}$), it becomes $\frac{1}{d^k} \left( \sum_{0\le j< d} \e^{\zeta^j x} \right)^k$. Letting $u = \e^x,\, v = \e^{\zeta x}$, for $d=1,2,3,4,6$, $\sum_{0\le j< d} \e^{\zeta^j x} = u, u+v, u+v+u^{-1}v^{-1}, u+v+u^{-1}+v^{-1}, u+v+u^{-1}v+u^{-1}+v^{-1}+uv^{-1}$ respectively. We can expand $g = f^k$ in $O(k^2)$ time using $f g' = k f' g$. Then we can take $[x^{nd}]$ in $O(k^2 \log(nd))$ time (or $k^2 \log(1049874433)$).

It is possible to speed up the last part, though I haven't implemented it. Using a sieve for rational integers, Gaussian integers, or Eisenstein integers, we need actual power computation only for $O(k^2 / \log(k))$ irreducible elements.

6205 观察星象

We can assume that one of the $n$ points lies on the circle. Binary search on the answer, and invert with respect to that point. The circle becomes a line, with freedom in the direction. The condition to contain a point is represented by a certain interval of directions. So we can check whether $\ge m$ intervals intersect, in $O(n \log(n))$ time.

This seems to be $O(n^2 \log(n) \log(\epsilon^{-1}))$ time overall, but if we check the points in random order we can reduce it to expected $O(n^2 \log(n) + n \log(n)^2 \log(\epsilon^{-1}))$.

348 斑斓星

Keeping the vertex with the minimum degree never makes the answer better than taking the whole graph. So just repeatedly remove the vertex with the minimum degree.

349 彩绘

The contribution of an element is, if there are $l$ elements to its left and $r$ elements to its right, $\frac{1}{(l+r)! 2^{l+r}} \frac{(l+r)!}{l!r!} (2l-1)!! (2r-1)!!$. This is because the merges on the left side and on the right side are actually independent, and on the left side there are $(l-1)$ ways to be unchanged and $1$ way to get halved.

350 旅行诗

DP to get down+exit costs, and then binary lifting for up+DP. $O((n+m) \log(n))$ time.

351 直至世界化作灰烬

The length of $y$ must be equal to or one less than the length of $x$; try both.

The problem is roughly about finding coefficients of $10^i + 10^{L-1-i}$, in $[0, 18]$. In fact this can be uniquely determined from the outermost one: The bottom digit tells modulo $10$, and the top digits limit the range size to $2$.

352 生存战略

Use a linked list. Each node has $4$ links to adjacent cells in counterclockwise order (but possibly rotated), along with the difference of the values.

In each query, follow the links (starting from an edge of the matrix) $O(n)$ times to locate the boundary of the query rectangle and its neighbors, and find how they are rotated; these are the nodes we need to update.

353 人造情感

First let's compute $W(S)$ in a clean way. An obvious solution is to make the tree rooted and let $\mathrm{dp}_u$ be the answer for the subtree at $u$ and only considering paths lying in the subtree. If we don't use a path with LCA $u$, the score is $\sum_{\text{$v$: child of $u$}} \mathrm{dp}_v$. If we use a path with LCA $u$, the score is its weight plus the sum over the separate subtrees.

We introduce $f_u := \mathrm{dp}_u - \sum_{\text{$v$: child of $u$}} \mathrm{dp}_v$. We can see the recurrence becomes $f_u = \max\left\{ 0, \max_{\text{path with LCA $u$}} \left( (\text{weight of the path}) - \sum_{\text{$x$ on the path, $x \ne u$}} f_x \right) \right\}$ and we have $W(S) = \sum_u f_u$. This also helps us understand $f(u,v)$ in the problem statement: if the LCA of this path is the root (especially if $u$ is the root), $f(u,v)$ is the sum of $f_*$ over the path. So our goal is to find all $f_u$ for each root (an additional simple DP to finish).

The bottom-up phase (finding all $f_u$ for a fixed root) reduces to standard path sum queries, $O(n \log(n))$ time.

The top-down phase is tough. Let $v$ be a child of $u$ (for the fixed root) and consider when we reroot so that $v$ becomes the parent of $u$. We might as well consider an imaginary leaf attached to $u$. We denote the rerooted version of $f_u$ by $F_v$, and use $f_*$ for the fixed root. A path with original LCA $l$ (for the fixed root) comes to have LCA $u$ when:

  1. $l = u$, and the path lies in $\mathrm{subtree}(u) \setminus \mathrm{subtree}(v)$.
  2. $l$ is a strict ancestor of $u$, and the path comes from above $u$ and goes to $\mathrm{subtree}(u) \setminus \mathrm{subtree}(v)$.

We need to take the maximum of the contributions of those paths. The contribution is the sum of the following:

  • the weight of the path
  • the sum of $-f_*$ over the path
  • the sum of $f_x - F_x$ for $x$ on the path between $l$ and $u$, excluding $l$, including $u$.

Here the last term can be maintained as an offset during the top-down DFS, so we can treat the contribution like a static value.

Finally how to take the maximum. For the case (1.), each path with $l = u$ becomes unavailable for at most two $v$'s. We can for example use an erasable priority queue at each $u$. For the case (2.), when the top-down DFS goes below $l$ in a direction of the path, insert it to a global data structure (a segment tree) indexed by the DFS order of the endpoint in that direction, and remove it when it comes back. This insertion happens at most twice per path, and for each $(u,v)$ we ask an interval query at most twice. $O(n \log(n))$ time overall.

1929 U 群把妹王

Inclusion-exclusion: Fix a partition $\mathcal{X}$ of rows and $\mathcal{Y}$ of columns ($[n] = \bigsqcup_{X\in\mathcal{X}} X, [m] = \bigsqcup_{Y\in\mathcal{Y}} Y$), count colorings such that if $x,x' \in X \in \mathcal{X}$ then rows $x, x'$ have the same pattern and if $y,y' \in Y \in \mathcal{Y}$ then columns $y, y'$ have the same pattern (there are $k^{|\mathcal{X}| |\mathcal{Y}|}$ of them), and add it to the answer with a certain coefficient.

Consider the coefficients for the rows. If there are exactly $r$ rows of a pattern, then we want to count it $[r \in \{0\} \cup S]$ times. So we can set the weight of $\mathcal{X}$ to $\prod_{X\in\mathcal{X}} f_{|X|}$ for $f$ satisfying $\exp\left( \sum_{r\ge0} f_r \frac{x^r}{r!} \right) = 1 + \sum_{r\in S} \frac{x^r}{r!}$. We want to find the sum of weights for $|\mathcal{X}| = i$, which is $F_i := [\frac{x^n}{n!}] \frac{\log\left( 1 + \sum_{r\in S} \frac{x^r}{r!} \right)^i}{i!}$. Finding $F_0, \ldots, F_n$ is exactly the power projection problem, thus solvable in $O(n \log(n)^2)$ time. If we use Newton iteration to find the compositional inverse, it is $O(|S| n \log(n))$ time.

After doing the same for columns to get $G_0, \ldots, G_m$, the answer is $\sum_{i,j} F_i G_j k^{ij}$. Use $k^{ij} = k^{\binom{i+j}{2}} / \left( k^{\binom{i}{2}} k^{\binom{j}{2}} \right)$ to compute it in $O((n+m) \log(n+m))$ time.
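
The underlying identity is $ij = \binom{i+j}{2} - \binom{i}{2} - \binom{j}{2}$, which turns the double sum into a single convolution (a chirp-z-style reduction). A minimal sketch, with a naive convolution standing in for FFT (the modulus here is just an arbitrary FFT prime for illustration):

```python
from math import comb

MOD = 998244353

def chirp_sum(F, G, k):
    # sum_{i,j} F_i G_j k^{ij} using ij = C(i+j,2) - C(i,2) - C(j,2):
    # fold k^{-C(i,2)} into F and k^{-C(j,2)} into G, convolve once,
    # then weight term t of the product by k^{C(t,2)}.
    A = [F[i] * pow(k, -comb(i, 2), MOD) % MOD for i in range(len(F))]
    B = [G[j] * pow(k, -comb(j, 2), MOD) % MOD for j in range(len(G))]
    C = [0] * (len(A) + len(B) - 1)
    for i, a in enumerate(A):  # naive convolution; FFT in practice
        for j, b in enumerate(B):
            C[i + j] = (C[i + j] + a * b) % MOD
    return sum(c * pow(k, comb(t, 2), MOD) for t, c in enumerate(C)) % MOD
```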

143 指数公式

The problem statement asks for the EGF $\exp$ modulo $2$. Although $\frac{1}{i!}$ does not always exist in small characteristic, we can define the EGF as a formal notation together with its operations such as multiplication, multiplicative inverse, differentiation, $\log$, and $\exp$.

In characteristic $2$, EGF multiplication (i.e. binomial convolution) is actually a subset convolution. For $[x^0] f = 1$, we see $f^2 = 1$, so $f^{-1} = f$ and $(\log(f))' = f'/f = f' f$. Note that if $f$ is a polynomial with $\deg(f) < 2^m$, then $\log(f)$ is also a polynomial, with $\deg(\log(f)) \le 2^m$ and $[x^{2^m}] \log(f) = [x^{2^{m-1}}] f$.
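
A quick numeric check of $f^2 = 1$ under binomial convolution in characteristic $2$ (truncated at a fixed length):

```python
from math import comb

def egf_mul_mod2(f, g):
    # Binomial convolution mod 2, truncated to len(f) terms:
    # (f g)_n = sum_i C(n, i) f_i g_{n-i}.
    N = len(f)
    return [sum(comb(n, i) * f[i] * g[n - i] for i in range(n + 1)) % 2
            for n in range(N)]

# For f with f_0 = 1: for odd n the terms pair up and cancel mod 2, and
# for even n > 0 the middle term's binomial C(n, n/2) is even, so f^2 = 1.
```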

For $g = \exp(f)$ ($[x^0] f = 0, [x^0] g = 1$), apply Newton iteration to solve $\log(g) - f = 0$. The iteration becomes $g \gets g - \frac{1/g}{\log(g) - f} = g + g (\log(g) + f)$. This gives an $O(n \log(n)^2)$ time algorithm.

In $\mathbb{F}_2$, we can speed it up to $O(n \log(n)^2 / w)$ ($w$ is the word size; it actually means $n \log_2(n) \lceil \log_2(n)/w \rceil$) by encoding a polynomial as an integer (bitset). We can maintain the coefficients in ranked-zeta-transformed form during the Newton iteration. We could make use of the CLMUL instruction for polynomial multiplication, though the zeta transform remains a bottleneck.

1928 Slime and Sequences

There is a bijection between good sequences and permutations: given a good sequence, read indices of $1$ from right to left, read indices of $2$ from right to left, and so on. In this way, the value in the good sequence increments when there is an ascent in the permutation. Counting for each position in the permutation, we see the $k$-th answer is $\sum_{1\le m\le n} A(m,k) \frac{n!}{m!}$, where $A$ denotes the Eulerian numbers.

We have $\mathrm{ans}(y) = n! [x^n] \frac{1}{1-x} \frac{1-y}{1-y\e^{x(1-y)}}$. First we substitute $z = x(1-y)$ (This might not look essential but it makes it much easier to see what we should do next), getting $\mathrm{ans}(y) = n! (1-y)^{n+2} [z^n] \frac{1}{(1-y-z) (1-y\e^z)}$.

We apply partial fraction decomposition with respect to $y$, getting $\mathrm{ans}(y) = n! (1-y)^{n+2} [z^n] \left( \frac{B(z)}{1-y-z} + \frac{C(z)}{1-y\e^z} \right)$ where $B(z) = \frac{1}{1-(1-z)\e^z}$ and $C(z) = \frac{-\e^z}{1-(1-z)\e^z}$ and we regard them as formal Laurent series starting from $z^{-2}$'s term. Now we have two problems.

For $B(z)$'s term, we have $[z^n] \frac{B(z)}{1-y-z} = [z^n] B(z) \sum_{i\ge0} (1-y)^{-(i+1)} z^i$ so after multiplying with $(1-y)^{n+3}$ we get a polynomial in $(1-y)$, so a single convolution gives us a polynomial in $y$.

For $C(z)$'s term, we use Lagrange inversion. Letting $f(z) = \e^z - 1, g(t) = f^{\langle -1 \rangle}(t) = \log(1+t), h(t,y) = \frac{C(g(t))}{1-y(1+t)}$, we have $[z^n] \frac{C(z)}{1-y\e^z} = [t^n] h(t,y) (t/g(t))^{n+1} g'(t)$. We can write it as $[t^n] \frac{D(t)}{1-y-yt} = [t^n] D(t) \sum_{i\ge0} y^i (1-y)^{-(i+1)} t^i$ where $D(t)$ is a formal Laurent series. Again, after multiplying with $(1-y)^{n+3}$ we can get a polynomial in $y$ with a single convolution (and some reversal).

Finally add these two and divide it by $(1-y)$ (divisible). $O(n \log(n))$ time overall.

13459 带加强和多项木

The generating function for a tree $f$ satisfies $f = x + \sum_{d\in D} f^d$. Letting $g = y - \sum_{d\in D} y^d$, the answer is $[x^n] f = [y^{n-1}] (y/g)^{n+1} g'$.

We want to find it modulo a prime power $p^e$. For power series $h(y)$, it is well-known that $h(y)^p \equiv h(y^p) \pmod{p}$. From this, we can prove $h(y)^{p^e} \equiv h(y^p)^{p^{e-1}} \pmod{p^e}$. So if $h(y)$ is invertible and $a = \sum_{i\ge0} a_i p^i$ is the $p$-adic expansion of an integer, we have $h(y)^a \equiv h(y)^{a_0} \cdots h(y)^{a_{e-1} p^{e-1}} h(y^p)^{a_e p^{e-1}} h(y^{p^2})^{a_{e+1} p^{e-1}} \cdots \pmod{p^e}$.
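
The congruence $h(y)^{p^e} \equiv h(y^p)^{p^{e-1}} \pmod{p^e}$ is easy to sanity-check numerically; a sketch for a small instance, truncating all products at $y^N$:

```python
def pmul(a, b, mod, N):
    # Polynomial product modulo (mod, y^N).
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] = (c[i + j] + ai * bj) % mod
    return c

def ppow(a, e, mod, N):
    # Binary exponentiation of a polynomial modulo (mod, y^N).
    r = [1] + [0] * (N - 1)
    a = (a + [0] * N)[:N]
    while e:
        if e & 1:
            r = pmul(r, a, mod, N)
        a = pmul(a, a, mod, N)
        e >>= 1
    return r

def subst_yp(h, p, N):
    # h(y^p) truncated at y^N.
    r = [0] * N
    for i, hi in enumerate(h):
        if i * p < N:
            r[i * p] = hi
    return r
```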

For $h(y) = g/y$ (a polynomial), we precompute $h(y)^{b p^i}$ for $i = 0,\ldots,e-1$ and $b=0,\ldots,p-1$ in $O(p^{2e} K^2)$ time (assuming naive multiplication). For each query, maintain the answer in the form $[y^m] w(y) h(y)^a$. First process the $(-(n+1)) \bmod p^e$ part in $O(p^{2e-1} K^2)$ time. After that, we repeatedly multiply by some $h(y)^{b p^{e-1}}$ and take the degrees at some remainder modulo $p$, until $m$ becomes $0$. This part costs $O(p^{2e-1} K^2 \log_p(n))$ time (each step is an $O(p^e K)$-size multiplication, but we only need every $p$-th term of the product).

415 王的象棋世界

Reflection. Consider columns to be cyclic modulo $(2C+2)$. Put a starting point of weight $+1$ at column $a$, and of weight $-1$ at column $-a$, so that DP values on columns $0$ and $C+1$ become $0$.

Precompute $(x^{-1} + x^0 + x^1)^{R-1} \bmod (x^{2C+2} - 1)$ in $O(C \log(C) \log(R))$ time, and then $O(1)$ time per query.
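
The precomputation is plain binary exponentiation in $\mathbb{Z}[x]/(x^{2C+2}-1)$; a sketch (names are mine), checked against a direct step-by-step DP on the cycle:

```python
def cyc_mul(a, b, M, mod):
    # Multiply in Z[x]/(x^M - 1), coefficients mod `mod`.
    c = [0] * M
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                c[(i + j) % M] = (c[(i + j) % M] + ai * bj) % mod
    return c

def step_poly(M, steps, mod):
    # (x^{-1} + x^0 + x^1)^steps mod (x^M - 1); x^{-1} is x^{M-1} cyclically.
    base = [0] * M
    base[0] = base[1] = base[M - 1] = 1
    r = [0] * M
    r[0] = 1
    while steps:
        if steps & 1:
            r = cyc_mul(r, base, M, mod)
        base = cyc_mul(base, base, M, mod)
        steps >>= 1
    return r

def step_dp(M, steps, mod, start):
    # Direct DP: a king-move walk shifting by -1, 0, or +1 around the cycle.
    dp = [0] * M
    dp[start] = 1
    for _ in range(steps):
        dp = [(dp[(i - 1) % M] + dp[i] + dp[(i + 1) % M]) % mod
              for i in range(M)]
    return dp
```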

1642 圆环之理

Let $D_n$ be the dihedral group of order $2n$. Apply inclusion-exclusion: for each subgroup $G \le D_n$, count the number $f(G)$ of paintings fixed by elements of $G$, and then the answer is obtained by a Möbius transform on the subgroup lattice of $D_n$: $\mathrm{ans} = \sum_{G\le D_n} \mu(1, G) f(G)$.

Let $D_n = \langle r, s \mid r^n = s^2 = (rs)^2 = 1 \rangle$ ($r$ for rotation, $s$ for reflection). There are two types of subgroups:

  • $\langle r^d \rangle$ for $d \mid n$
  • $\langle r^d, r^i s \rangle$ for $d \mid n$ and $0 \le i < d$

To find $f(G)$, it suffices to count the orbits of vertices and of segments. Classify the segments by their arc length $l$ ($1 \le l \le n/2$), which is an invariant for rotation and reflection. By careful casework we can see there are $O(1)$ types of orbits. We need to check parity of $n, d, i, i-l, \gcd(d,n/2)$.

We have $\mu(1, \langle r^d \rangle) = \mu(n/d)$ and $\mu(1, \langle r^d, r^i s \rangle) = -(n/d) \mu(n/d)$. It is easy to inductively calculate or verify these values using $\sum_{1\le H\le G} \mu(1, H) = [1=G]$. Hence, we only need to compute the values for square-free $n/d$, so $O(2^{\omega(n)} \log(n))$ time overall.

401 因子统计 (Ex)

Let $N = \max n$. For each small prime $p \le N^{1/2}$, compute $v_p\left(\binom{n}{m}\right) = v_p(n!) - v_p(m!) - v_p((n-m)!)$ in $O(\log_p(n))$ time per query. This is $O(q \sqrt{N} / \log(N))$ in total.

For large prime $p > N^{1/2}$, we have $v_p\left(\binom{n}{m}\right) = \lfloor \frac{n}{p} \rfloor - \lfloor \frac{m}{p} \rfloor - \lfloor \frac{n-m}{p} \rfloor \in \{0,1\}$, so we can count such $p$ in $O(1)$ time if we can get $\sum_{p>N^{1/2}} \lfloor \frac{n}{p} \rfloor$ for each $n$. Precomputing this using prefix sums takes $O(N + \sum_{p>N^{1/2}} \frac{N}{p}) = O(N)$ time.
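
Both per-prime computations are small to sketch: Legendre's formula $v_p(n!) = \sum_{i\ge1} \lfloor n/p^i \rfloor$ for small $p$, and the single floor expression once $p^2 > n$:

```python
from math import comb

def vp_factorial(n, p):
    # Legendre's formula: v_p(n!) = sum_{i>=1} floor(n / p^i).
    s, q = 0, p
    while q <= n:
        s += n // q
        q *= p
    return s

def vp_binom(n, m, p):
    return vp_factorial(n, p) - vp_factorial(m, p) - vp_factorial(n - m, p)

def vp_direct(n, m, p):
    # Reference: strip factors of p from C(n, m) directly.
    c, v = comb(n, m), 0
    while c % p == 0:
        c //= p
        v += 1
    return v
```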

Theoretically faster solutions are possible if we handle $N^{1/3} < p \le N^{1/2}$ with a suitable data structure.

13347 新年的复读机

There exists an optimal solution where we start from an element and repeatedly merge it with an adjacent element. Proof: Consider the tree representing the merges and assume there exists ((a b) (c d)) structure. Replacing with (((a b) c) d) or (a (b (c d))) does not make the cost larger (Evaluate the sum of the two and use $\gcd(a,b,c) \le \gcd(a,b), \gcd(b,c,d) \le \gcd(c,d)$).

There exists an optimal solution where if we extend the current interval to the left, we do so until $\gcd$ strictly decreases or reach the leftmost, and the same for the right. This can be proven by rearranging the merges from the ending. With this assumption, computing the minimum cost starting from $[l, r]$, with memoization, needs $O(n \log(\max a_i))$ calls: This is because $\gcd$ of $[l, r]$ is strictly smaller than $[l+1, r]$ (which happens $\log$ times for each $r$) or than $[l, r-1]$ (which happens $\log$ times for each $l$).

Creating the lists of $\gcd$ changing points for each $l$ and for each $r$ takes $O(n \log(\max a_i))$ time with an appropriate $\gcd$ method. Locating $[l, r]$ in these lists in $O(1)$ time per call needs subtle care, but $O(\log(\log(\max a_i)))$ time is fast enough.

13348 新年的追逐战

Let's see what $G_1 \times G_2$ looks like. It is the disjoint union of $C_1 \times C_2$ over pairs $(C_1, C_2)$, where $C_i$ is a connected component of $G_i$.

  • If at least one of $C_1$ and $C_2$ is an isolated point, then $C_1 \times C_2$ consists only of isolated points.
  • Otherwise, if both $C_1$ and $C_2$ are bipartite, then $C_1 \times C_2$ consists of two bipartite connected components.
  • Otherwise, if exactly one of $C_1$ and $C_2$ is bipartite, then $C_1 \times C_2$ is bipartite connected.
  • Otherwise, $C_1 \times C_2$ is non-bipartite connected.

So there are only $3$ states to consider: isolated, non-isolated bipartite, non-bipartite.

What we need is to count the number of graphs with a marked (bipartite) connected component. This is a standard EGF $\log$ practice.

776 小鸣的疑惑

The answer is $a_1$ if $n=1$ and $a_1 - a_2$ if $n\ge2$. We can prove it by induction.

A non-inductive (?) way to prove it: Consider the contribution of $a_i$. Operations on the right side are irrelevant. If we regard the operations on the left side as a permutation on $[i-1]$, each suffix maximum negates the value. There is a famous bijection relating the number of suffix maxima to the number of cycles, so the total is the sum of the signs of all permutations.
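
That bijection (e.g. Foata's fundamental transform) makes the number of suffix maxima equidistributed with the number of cycles; a brute-force check for small $n$:

```python
from itertools import permutations
from collections import Counter

def suffix_maxima(perm):
    # Count positions i with perm[i] > perm[j] for all j > i.
    cnt, mx = 0, -1
    for v in reversed(perm):
        if v > mx:
            cnt, mx = cnt + 1, v
    return cnt

def cycle_count(perm):
    seen = [False] * len(perm)
    cnt = 0
    for i in range(len(perm)):
        if not seen[i]:
            cnt += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return cnt

def distributions(n):
    sm = Counter(suffix_maxima(p) for p in permutations(range(n)))
    cy = Counter(cycle_count(p) for p in permutations(range(n)))
    return sm, cy
```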

777 红蔷薇白玫瑰

The goal uniquely determines the start, so check each vertex as the goal. We can use hash or KMP (with quick links).

778 山遥路远

Let $f(u,v)$ be the minimum length of a valid $s$-$t$ walk. There are $3$ cases: $u = v$, $f(u,w)+f(w,v)$, and ( $f(u',v')$ ). If we prepare intermediate states representing that we have followed ( but not ) yet, these can be computed in $O(n^3 \log(n))$ time, Dijkstra-style. This is fast enough to pass, but if we use a priority queue supporting faster decrease-key, it becomes $O(n^3)$ time (in this problem, an $n$-ary tree suffices).

About an upper bound on the answer: Since the optimal walk for $f(u,v)$ does not recursively depend on $f(u,v)$ itself, the depth of the bracket sequence cannot exceed $n^2$. Another view of the problem is to consider (vertex, depth) states from the beginning of the walk; this gives an upper bound of $n^3$ on the number of edges in the optimal walk.

779 命运歧途

Inclusion-exclusion. Say there are $m$ numbers with a certain remainder modulo $k$, let $[x^i] f_m(x)$ be the sum of weights ($\pm1$) of ways to form $i$ chains and line them up. Then the answer for $k$ is $\sum_i i! [x^i] f_{\lfloor n/k \rfloor}(x)^{k - (n \bmod k)} f_{\lfloor n/k \rfloor + 1}(x)^{(n \bmod k)}$. We have $[x^i] f_m(x) = [y^m] \left(y + \sum_{j\ge2} 2 (-1)^{j-1} y^j\right)^i = [y^m] \left(\frac{y-y^2}{1+y}\right)^i$, so it is easy to compute all needed $[x^i] f_m(x)$ in $O(n^2)$ time.

Computing all needed powers of $f_m(x)$ by FFT takes $O(n^2 \log(n)^2)$ time, and this is enough with fast implementation.

We make use of the fact that $f_1(x) = x + O(x^2)$ and $f_m(x) = 2x + O(x^2)$ for $m \ge 2$. When the modulus $M$ is odd, we can apply the differential method: Let $h = f^a g^b$; then $h' = h (a f'/f + b g'/g)$. We can find the coefficients $i! [x^i] h, i! [x^i] h f', i! [x^i] h f'/f, i! [x^i] h g', i! [x^i] h g'/g$ in parallel, without divisions (except by $2$). This works in $O(n (\deg(f) + \deg(g)))$ time, so computing for all $k$ takes $O(n^2 \log(n))$ time.

It remains to deal with modulo $2^e$. Take $l = O(e)$ such that $2^e \mid l!$ and just compute the first $l$ terms. We can achieve $O(l^2 n \log(n))$ time by binary exponentiation or precomputing needed powers.

780 生活在树上

We must decrease $\sum_u \mathrm{dist}(u, p_u)$ by $2$ in every operation. Use a priority queue to maintain the edges available for a swap (once pushed, it remains available until popped). After an operation, we can go through all adjacent edges naively (a value contributes $O(n)$ times overall). $O(n^2 \log(n))$ time.

781 三千世界

See [353 人造情感]. We compute the answer as $\sum_u f_u$. We can do DP by letting $g_{u,k}$ be the probability that, within the subtree at $u$, the size of the connected component with $f_* = 0$ containing $u$ is $k$ ($k = 0$ means $f_u = 1$). $k$ is bounded by the subtree size, and merging information is similar to convolution, so we can achieve $O(n^2)$ time.

782 大过滤器

We want to represent a cost as $(a, b)$, where the actual cost is $c_a + b$ for some array $c$. We have $(a, b) + 1 = (a, b+1)$, $(a, b) \times 2 = (a, 2b)$, and comparison by lexicographic order. After finding the costs inside a layer, if there are $(a, b)$ and $(a, b')$ such that $b + n < b'$, then we can re-assign them different $a$-values because $2b+n + n < 2b', 2(2b+n)+n < 4b', \ldots$. So we do something like compression after each layer, to keep $a = O(n), b = O(n^2)$, and maintain $c_a \bmod 998244353$.

783 碎梦

By linearity, it reduces to compute the coefficients of $(x, y) \to (x', y')$ for $O(k^2)$ pairs. It only depends on $x'-x, y, y'$, so assume $(r, r) \to (r+n, r+k)$. From the recurrence, its combinatorial meaning is the number of partitions of $[r+n]$ into $(r+k)$ non-empty sets such that elements $1,\ldots,r$ are in different sets. So we can compute it as $[\frac{x^n}{n!}] (\e^x)^r \frac{(\e^x - 1)^k}{k!}$ in $O(k \log(n))$ time. This is a special case of $r$-Stirling number.
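
For a small check: the EGF coefficient expands by the binomial theorem to $\frac{1}{k!}\sum_{0\le j\le k}(-1)^j\binom{k}{j}(r+k-j)^n$, and the $r$-Stirling numbers satisfy the usual recurrence, so the two can be compared directly:

```python
from functools import lru_cache
from math import comb, factorial

def egf_value(n, k, r):
    # [x^n/n!] e^{rx} (e^x - 1)^k / k!, expanding (e^x - 1)^k by the
    # binomial theorem into sum_j (-1)^j C(k,j) e^{(k-j)x}.
    s = sum((-1) ** j * comb(k, j) * (r + k - j) ** n for j in range(k + 1))
    q, rem = divmod(s, factorial(k))
    assert rem == 0
    return q

@lru_cache(None)
def r_stirling(m, j, r):
    # Partitions of [m] into j nonempty blocks with 1..r in distinct blocks;
    # element m > r either joins one of the j blocks or starts a new one.
    if j < 0 or j > m:
        return 0
    if m == r:
        return 1 if j == r else 0
    return j * r_stirling(m - 1, j, r) + r_stirling(m - 1, j - 1, r)
```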

To speed up, we can precompute $(x_{i+1} - x_i)^j$ for each $i, j$. $O(k \log_m(n) + k^2 m)$ time.

2102 生命游戏

$T$-th power of a permutation on $[2^{16}]$.

2103 装备升级

Split into two groups according to $w_{i,1} \le w_{i,2}$ or $w_{i,1} > w_{i,2}$. For each group and for each $k$, we want to compute the minimum cost to get $k$ levels.

  • For $w_{i,1} \le w_{i,2}$, convexity holds, just sort $w_{i,j}$ in increasing order.
  • For $w_{i,1} > w_{i,2}$, it is easy to see that there exists an optimal solution where there is at most one $i$ to use $w_{i,1}$ but not $w_{i,2}$. Sort by $w_{i,1} + w_{i,2}$ and do DP with $O(1)$ states.

It is even possible to find the answer for each $m$ in $O(n \log(n))$ time, since the combining is min-plus convolution with one operand being convex.

2104 因子统计

See [401 因子统计 (Ex)].

6198 三维立体混元劲

Let $f(x_1, \ldots, x_k)$ be the multivariate EGF such that $[\frac{x_1^{i_1}}{i_1!} \cdots \frac{x_k^{i_k}}{i_k!}] f$ is the number of graphs on $i_1 + \cdots + i_k$ labeled vertices ($i_l$ vertices from group $l$). Computing coefficients of $f$ in $O(N k \log(N))$ time is easy (Binary exponentiation is fast enough as $\sum_l \log(n_l) \le \log(N)$). The answer is $[\frac{x_1^{n_1}}{n_1!} \cdots \frac{x_k^{n_k}}{n_k!}] \log(f)$.

Since this is the problem that introduced (to us) how to perform multivariate series operations modulo $(x_1^{n_1+1}, \ldots, x_k^{n_k+1})$, I summarize the method here.

First consider multiplication. Writing $m_l = n_l + 1$, we encode $(i_1, \ldots, i_k)$ by $i = i_1 + m_1 i_2 + \cdots + (m_1 \cdots m_{k-1}) i_k$ and define $\chi(i) = \lfloor \frac{i}{m_1} \rfloor + \lfloor \frac{i}{m_1m_2} \rfloor + \cdots + \lfloor \frac{i}{m_1\cdots m_{k-1}} \rfloor$. Then we can see $i + j$ has no "carry" iff $\chi(i+j) \equiv \chi(i) + \chi(j) \pmod{k}$. So we replace $x_1^{i_1} \cdots x_k^{i_k}$ with $x^i t^{\chi(i)}$, multiply the two series modulo $(x^N, t^k - 1)$ for $N = m_1 \cdots m_k$, then take the terms $x^i t^{\chi(i)}$ to get the correct product. Since $k \le \log_2(N)$, it is fine to use naive multiplication on $t$, achieving $O(N k \log(N))$ time.

For the multiplicative inverse, Newton's method works as usual.

For $\log$, we want an operator $D$ satisfying linearity and the Leibniz rule, so as to derive $D (\log(f)) = \frac{D f}{f}$. Here $D = x_1 \frac{\d}{\d x_1} + m_1 x_2 \frac{\d}{\d x_2} + \cdots + (m_1 \cdots m_{k-1}) x_k \frac{\d}{\d x_k}$ is useful because $D x^i = i x^i$.
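
A minimal sketch of the carry-detecting multiplication, writing $m_l = n_l + 1$ for the truncation lengths (so $N = \prod_l m_l$ and each exponent $i_l$ ranges over $[0, m_l)$); a naive double loop stands in for the FFT on the $x$ dimension, and the result is checked against the direct multivariate product:

```python
def chi(i, ms):
    # chi(i) = floor(i/m_1) + floor(i/(m_1 m_2)) + ...; i + j is carry-free
    # in mixed radix iff chi(i) + chi(j) == chi(i + j); otherwise they
    # differ by 1..k-1, so comparing modulo k suffices.
    s, M = 0, 1
    for m in ms[:-1]:
        M *= m
        s += i // M
    return s

def multi_mul(a, b, ms, mod):
    # a, b indexed by i = i_1 + m_1 i_2 + ...; multiply modulo
    # (x^N, t^k - 1) after attaching t^{chi(i) mod k}, then keep the
    # terms x^i t^{chi(i) mod k}, which are exactly the carry-free ones.
    k = len(ms)
    N = 1
    for m in ms:
        N *= m
    c = [[0] * k for _ in range(N)]
    for i in range(N):
        if a[i]:
            ti = chi(i, ms) % k
            for j in range(N - i):
                if b[j]:
                    t = (ti + chi(j, ms)) % k
                    c[i + j][t] = (c[i + j][t] + a[i] * b[j]) % mod
    return [c[i][chi(i, ms) % k] for i in range(N)]
```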

13343 新年的军队

Instead of permutations and the answers $a_1,\ldots,a_n$, we consider a uniformly random real sequence in $[0,1]^n$ with $m$ descents, and the probability density function $b(t)$ of the $k$-th element. Their relation is $b(t) = \sum_{l=1}^n a_l \dfrac{t^{l-1} (1-t)^{n-l}}{(l-1)!(n-l)!}$, which is essentially a binomial transform, so once we find the coefficients of $b$, we can get $a_1,\ldots,a_n$ in $O(n \log(n))$ time.

Let $f(t)$ be the probability density function of the last element, with indeterminate $x$ counting the elements except the last, and indeterminate $y$ counting the descents, so we have $f = 1 + x \int_0^t \d t f + xy \int_t^1 \d t f$. We get a differential equation $f' = x f - xy f$ and initial conditions $f(0) = 1 + xy \int_0^1 \d t f$ and $f(1) = 1 + x \int_0^1 \d t f$, which gives $f(t) = \dfrac{(1-y) \e^{x(1-y)t}}{1 - y \e^{x(1-y)}}$. The probability density function of the first element is $f(1-t)$ by symmetry.

Thus we have $b(t) = [y^m] \left([x^{k-1}] f(t)\right) \left([x^{n-k}] f(1-t)\right)$. Expand the denominators to get $b(t) = [y^m] (1-y)^{n+1} \left( \sum_{i\ge0} y^i \frac{(t+i)^{k-1}}{(k-1)!} \right) \left( \sum_{j\ge0} y^j \frac{(1-t+j)^{n-k}}{(n-k)!} \right)$. We evaluate it at $t = 0,\ldots,n-1$, so $b(t) = \frac{1}{(k-1)!(n-k)!} [y^m] (1-y)^{n+1} \sum_{i\ge t,j\ge1-t} y^{i+j-1} i^{k-1} j^{n-k}$. It is almost the same convolution for all $t$; only the added range differs. We can get $b(0)$ by a convolution, and then each $b(t+1)-b(t)$ from the precomputed convolutions $(1-y)^{n+1} \sum_i y^i i^{k-1}$ and $(1-y)^{n+1} \sum_j y^j j^{n-k}$.

The bottleneck is the polynomial interpolation, $O(n \log(n)^2)$ time.

13339 你将如闪电般归来

TODO

13335 哈希杀手

Lagrange interpolation gives $H(x) = \sum_i H(q^i) \prod_{0\le j< n,j\ne i} \frac{x-q^j}{q^i-q^j}$. We aim to compute $[x^m] \prod_{0\le j< n,j\ne i} \frac{x-q^j}{q^i-q^j}$ for each of the $k$ given $i$'s (this should be possible in the same time complexity due to the transposition principle, although it might become harder). We reverse the coefficients (this is not essential: it corresponds to considering $\mathbb{F}_p[[x^{-1}]]$ instead of $\mathbb{F}_p[[x]]$, but both are fine as long as we stick to one) and let $l = n-1-m$. We also use the $q$-Pochhammer notation $(x;q)_n = \prod_{0\le j< n} (1-q^jx)$. So the answer for $i$ is $\dfrac{[x^l] \prod_{0\le j< n,j\ne i} (1-q^jx)}{q^{\binom{i}{2}} q^{i(n-1-i)} (-1)^i (q;q)_i (q;q)_{n-1-i}}$.

Consider the numerator. We build its generating function $f(x,y) = \sum_{i,j\ge0} f_{i,j} x^i y^j = \sum_{i\ge0} y^i \prod_{0\le j< n,j\ne i} (1-q^jx) = (x;q)_n \sum_{i\ge0} \frac{y^i}{1-q^ix}$. As a typical method for $q$-analogues, we examine $f(qx,y)$ and $f(x,qy)$ (Note that only using $f(x,qy)$ to derive a recurrence in $i$ leads to a trivial result):

  • $y f(qx,y) = \frac{1-q^nx}{1-x} f(x,y) - \frac{(qx;q)_n}{1-x}$
  • $x f(x,qy) = f(x,y) - \frac{(x;q)_n}{1-y}$

We want to obtain relations within $[x^l] f(x,y)$, so we eliminate $x f(-,-)$. From the first formula we have $(1-x)qy f(qx,qy) = (1-q^nx) f(x,qy) - (qx;q)_n$; applying the second to it, we have $qy f(qx,qy) - y f(qx,y) + \frac{y (qx;q)_n}{1-y} = f(x,qy) - q^n f(x,y) + \frac{q^n (x;q)_n}{1-y}$. Take $[x^l y^i]$ for $i \ge 1$ and simplify to get the recurrence $f_{l,i} = \dfrac{(q^{l+i}-q^l) f_{l,i-1} + (q^l-q^n) [x^l] (x;q)_n}{q^i-q^n}$. The initial value is $f_{l,0} = [x^l] (qx;q)_{n-1}$.

We first consider how to query $f_{l,i}$ $k$ times, assuming we have $[x^l] (x;q)_n$ and $f_{l,0}$. Fix a block size $B$ and precompute $f_{l,0}, f_{l,B}, f_{l,2B}, \ldots$. We can write $f_{l,i+B} = \dfrac{a(q^i) f_{l,i} + b(q^i)}{c(q^i)}$ for some polynomials $a(z),b(z),c(z)$ which do not depend on $i$. Computing these polynomials takes $O(B \log(B)^2)$ time by divide-and-conquer; evaluating them at $z = q^0, q^B, q^{2B}, \ldots$ takes $O((B + \frac{n}{B}) \log(B + \frac{n}{B}))$ time by using $q^{ij} = q^{\binom{i+j}{2}-\binom{i}{2}-\binom{j}{2}}$; and then queries are answered in $O(kB)$ time (plus one division per query). Selecting $B = \Theta(n^{1/2} \min\{ \log(n)^{-1/2}, k^{-1/2} \log(n) \})$ gives $O(n^{1/2} (\log(n)^{3/2} + k^{1/2} \log(n)^{1/2}))$ time (though we should be careful about constant factors).

The remaining task is to compute $(q;q)_i$ for $O(k)$ $i$'s, as the $q$-binomial theorem states $[x^l] (x;q)_n = \frac{(q;q)_n}{(q;q)_l (q;q)_{n-l}} q^{\binom{l}{2}} (-1)^l$. Using the recurrence $(q;q)_i = (q;q)_{i-1} (1-q^i)$, we can apply the same method as above. The divide-and-conquer part can be replaced with expanding $(zq;q)_B$ using the $q$-binomial theorem in $O(B)$ time.

1603 被 EI 加 0 了

Let $f_{n,m}$ be the answer when every $1, \ldots, m$ must occur at least once. The problem essentially asks us to compute $f_{n,m}$ for all $1 \le m \le n \le N$ to answer each query in $O(N)$ time.

Regard the condition as $m$ conditions: For each $b \in [1,m]$, there does not exist $1 \le i < j \le n$ such that $\max\{ a_1,\ldots,a_i \} = b = \min\{ a_j,\ldots,a_n \}$. We can assume $i, j$ to be the positions of the first and the last occurrence of $b$, respectively.

Apply inclusion-exclusion on these conditions and consider inserting $1, \ldots, N$ in order. We can run DP by letting $(i,j)$ denote the state where the length is $i+j$, and we cannot insert larger values before the $i$-th element. The transitions when inserting $b$ are:

  • No condition for $b$: $(i,j) \to (i,j+k)$ ($k \ge 1$), weight $\binom{j+k}{k}$.
  • Negation of the condition for $b$: $(i,j) \to (i'+1,j'+k+1)$ ($i+j = i'+j', j \ge j' \ge 0, k \ge 0$), weight $-\binom{j'+k}{k}$. Note that $b$ should occur $\ge 2$ times.

We can consider $(i,j) \to (i+1,j-1)$ before the second transition to achieve $O(N^4)$ time, and can use FFT for $j$'s dimension to achieve $O(N^3 \log(N))$ time.

Since the convolution for $j$'s dimension is binomial, we consider EGF ($\frac{x^j}{j!}$). The transitions become:

  • $(i,j) \to (i,j+k)$ ($k \ge 1$): $f \mapsto f(\e^x-1)$.
  • $(i,j) \to (i+1,j-1)$: $f \mapsto \frac{\d}{\d x} f$
  • $(i,j) \to (i+1,j+k+1)$ ($k \ge 0$): $f \mapsto \int \d x (f \e^x)$

We can see that the DP values are linear combinations of $\e^{cx}$ for $0 \le c \le b$ after we insert $1, \ldots, b$. Extracting the answer is now non-trivial, but $(i,j) \to (i+1,j-1)$ is useful again so we only need $[x^0]$. This runs in $O(N^3)$ time.

998 矩阵归零

Let $[z^t] f(z)$ be the probability that $t$ operations change the matrix from the given initial state to zero, $[z^t] g(z)$ be similar but zero to zero, and $[z^t] h(z)$ be the probability that from the initial state the matrix becomes zero in $t$ operations for the first time. We have $h(z) = \frac{f(z)}{g(z)}$ and the answer is $h'(1)$.

Let $[z^t] F_{a,b}(z)$ be the probability that $t$ operations flip $a$ rows odd times and $b$ columns odd times, so that $f(z) = F_{a,b}(z) + F_{n-a,m-b}(z)$ for certain $a, b$ and $g(z) = F_{0,0}(z) + F_{n,m}(z)$. First, consider the EGF for rows: $\cosh(\frac{z}{n})^{n-a} \sinh(\frac{z}{n})^a$. We can write it as $\sum_{0\le i\le n} p_{a,i} \e^{\frac{n-2i}{n} z}$, and similarly for columns $\sum_{0\le j\le m} q_{b,j} \e^{\frac{m-2j}{m} z}$, then $F_{a,b}(z) = \sum_{0\le i\le n} \sum_{0\le j\le m} p_{a,i} q_{b,j} \dfrac{1}{1 - \frac{n-2i}{n} \frac{m-2j}{m} z}$.

We cannot just say $h'(1) = \frac{f'(1) g(1) - f(1) g'(1)}{g(1)^2}$ because $F_{a,b}(z)$ has a term of $\frac{1}{1-z}$. We instead use $h(z) = \frac{(1-z) f(z)}{(1-z) g(z)}$ and can confirm $\left. (1-z) g(z) \right|_{z=1} \ne 0$. Note that we refer to Abel's theorem to rigorously justify the substitution $z=1$.

It remains to compute $\left. (1-z) F_{a,b}(z) \right|_{z=1}$ and $\left. \frac{\d}{\d z} \bigl((1-z) F_{a,b}(z)\bigr) \right|_{z=1}$. The former is $p_{a,0} q_{b,0} + p_{a,n} q_{b,m}$. The latter is $\sum_{(i,j)\ne(0,0),(n,m)} p_{a,i} q_{b,j} \dfrac{-1}{1 - \frac{n-2i}{n} \frac{m-2j}{m}}$, which boils down to multipoint evaluation of a rational function $\sum_j q_{b,j} \frac{-1}{1 - \frac{m-2j}{m} w}$ on points $w = \frac{n-2i}{n}$. $O((n+m) \log(n+m)^2)$ time.

7374 迷途知返

Removing the moves between $n-1$ and $n$ gives us a smaller subproblem. The answer is $\prod_i \binom{(d_{i-1}-1)+d_i}{d_i}$. Alternatively, we can use BEST theorem.

7375 萌二

Let $f(x) = \sum_i f_i x^i$. Conceptually $f = S (1 + f + \mathrm{multiset}(f, 2))$, and precisely $f(x) = \left(\sum_i [i \in S]\right) \left(1 + f(x) + \sum_{i< j} f_i f_j x^{i+j} + \sum_i \frac{f_i(f_i+1)}{2} x^{2i} \right)$. Notice that in order to determine $f_0, \ldots, f_n \bmod 2$, we need $f_0, \ldots, f_{\lfloor n/2 \rfloor} \bmod 4$, and we need $f_0, \ldots, f_{\lfloor n/4 \rfloor} \bmod 8$, and so on. Other than that, divide-and-conquer similar to usual online convolution works in $O(n \log(n)^2)$ time.

A solution using bitsets instead of FFT is possible: We can maintain $e$ bitsets for modulo $2^e$ as the problem size gets exponentially smaller for larger $e$. $O(n^2/w)$ time ($w$ is the word size).

7376 老生常谈

The answer is $\sum_{0\le j_0\le j_1\le\cdots\le j_{m-1}\le j_m=x} j_0^n = \sum_{0\le j\le x} j^n [t^{x-j}] \frac{1}{(1-t)^m}$.

Consider a generating function of $j^n$. We have $\sum_{i,j\ge0} j^n \frac{s^i}{i!} t^j = \frac{1}{1-t\e^s}$, so the answer becomes $[\frac{s^n}{n!} t^x] \frac{1}{1-t\e^s} \frac{1}{(1-t)^m}$.

Let $u = \e^s - 1$ so that we can take modulo $u^{n+1}$. It becomes $[\frac{s^n}{n!} t^x] \frac{1}{(1-t)^m} \sum_{0\le i\le n} \frac{t^i}{(1-t)^{i+1}} u^i = [\frac{s^n}{n!}] \sum_{0\le i\le\min\{n,x\}} \binom{m+x}{m+i} u^i$.

Let $l = \min\{n,x\}$ and $f(u) = \sum_{0\le i\le l} \binom{m+x}{m+i} u^i, g(z) = \sum_{0\le i\le l} g_i z^i = f(z-1)$ (Note that these are polynomials). The answer is $\sum_{0\le i\le l} g_i i^n$ so our goal is to find $g_0, \ldots, g_l$, and the following is a general method which works when $f$ is a truncation of a D-finite series. From $u f'(u) = \sum_{0\le i\le l} \binom{m+x}{m+i} ((m+i) - m) u^i$ we can derive a differential equation: $0 = (xu-m) f(u) - (u^2+u) f'(u) - \binom{m+x}{m+l} (x-l) u^{l+1} + m \binom{m+x}{m}$. Substituting $u=z-1$, we also get a differential equation for $g$, and thus the recurrence $0 = (x-(i-1)) g_{i-1} - (m+x-i) g_i - \binom{m+x}{m+l} (x-l) \binom{l+1}{i} (-1)^{l+1-i} + [i=0] m \binom{m+x}{m}$. It is slightly simpler to go from $g_l = \binom{m+x}{m+l}$ and then compute $g_{l-1}, \ldots, g_0$.

For computing $\binom{m+x}{m+l}$, use an $O(\sqrt{m+x} \log(m+x))$ time method or local precomputation for large factorials. For computing $\frac{1}{x-i}$ for $i=0,\ldots,l-1$ in $O(n)$ time, use prefix products for example. For computing $i^n$ for $i=0,\ldots,l$ in $O(n)$ time, use a prime sieve.
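
For the sieve-based powers: $i \mapsto i^n$ is completely multiplicative, so a linear sieve needs a fast exponentiation only at the primes. A minimal sketch (the function layout is mine):

```python
# pw[i] = i^e mod m for 0 <= i <= n. i -> i^e is completely multiplicative,
# so a linear sieve calls pow() only at the primes: O(n) multiplications
# plus O(n / log n) fast exponentiations.
def powers(e, n, m):
    pw = [0] * (n + 1)
    pw[0] = pow(0, e, m)        # 1 if e == 0, else 0
    if n >= 1:
        pw[1] = 1 % m
    spf = [0] * (n + 1)         # smallest prime factor
    primes = []
    for i in range(2, n + 1):
        if spf[i] == 0:
            spf[i] = i
            primes.append(i)
            pw[i] = pow(i, e, m)
        else:
            pw[i] = pw[spf[i]] * pw[i // spf[i]] % m
        for p in primes:
            if i * p > n:
                break
            spf[i * p] = p
            if i % p == 0:
                break
    return pw
```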

10931 对合排列

The answer does not change if we consider all permutations instead of only involutions: Since $p$ and $p^{-1}$ have the same inversion number, non-involutions cancel modulo $2$. So the problem is to compute the $q$-factorial $\prod_{1\le i\le n} \frac{1-q^i}{1-q}$.

For the numerator, using the $q$-binomial theorem we have $\prod_{1\le i\le n} (1-q^i) = \sum_{0\le j\le n} \frac{(1-q^n)\cdots(1-q^{n-j+1})}{(1-q)\cdots(1-q^j)} q^{j(j+1)/2}$. We can go from larger $j$ to smaller. Using a bitset, multiplying by $1-q^j$ is easy, and dividing by $1-q^j$ is equivalent to multiplying by $(1+q^j) (1+q^{2j}) (1+q^{4j}) \cdots$.

For the final division by $(1-q)^n$, if $-n = \sum_{e\ge0} n_e 2^e$ is the $2$-adic expansion, we have $(1-q)^{-n} \equiv \prod (1-q^{2^e})^{n_e} \pmod{2}$. Overall $O(n \sqrt{k} \log(k) /w)$ time ($w$ is the word size).
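
The bitset manipulations can be sketched with Python integers as GF(2) bitsets (bit $i$ holding the coefficient of $q^i$): multiplying by $1-q^j$ is one shift-xor, and dividing by it multiplies by $(1+q^j)(1+q^{2j})(1+q^{4j})\cdots$ as described above. A toy illustration (the truncation degree and the product are arbitrary):

```python
# GF(2) power series as Python ints (bit i = coefficient of q^i), truncated
# to degree < D. Multiplying by 1 - q^j is a single shift-xor; dividing by it
# multiplies by (1 + q^j)(1 + q^(2j))(1 + q^(4j))... as in the text.
D = 200
MASK = (1 << D) - 1

def mul_1_minus(f, j):
    return (f ^ (f << j)) & MASK

def div_1_minus(f, j):
    while j < D:
        f = (f ^ (f << j)) & MASK
        j *= 2
    return f

f = 1                            # the series 1
for i in range(1, 21):
    f = mul_1_minus(f, i)        # prod_{1 <= i <= 20} (1 - q^i) mod 2
g = f
for i in range(1, 21):
    g = div_1_minus(g, i)        # dividing everything back out...
assert g == 1                    # ...recovers 1
```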

It is possible to solve this problem algebraically without noticing the bijection. Let $f_n$ be the generating function, then we have $f_n = f_{n-1} + \frac{q(1-q^{2n-2})}{1-q^2} f_{n-2}$ by caseworking on the last element. Putting $g_n = (1-q)^n f_n$, we get $g_n \equiv (1-q) g_{n-1} + q(1-q^{2n-2}) g_{n-2} \pmod{2}$. With some calculation we can guess $g_n \equiv \prod_{1\le i\le n} (1-(-q)^i) \pmod{2}$ and prove it by induction (in fact $g_n = \prod_{1\le i\le n} (1-(-q)^i)$ satisfies $g_n = (1-q) g_{n-1} + q(1-q^{2n-2}) g_{n-2}$ without modulo $2$).

10933 最大值

Let's find the probability $p_r$ that $\max\{ a_1,\ldots,a_n \} < r$ for each $1 \le r \le k$. We have $p_r = \frac{1}{n^k} [\frac{x^k}{k!}] \left(\sum_{0\le i< r} \frac{x^i}{i!}\right)^n$.

Let $f_{l,r} = \sum_{l\le i< r} \frac{x^i}{i!}$. We have $(f_{l,r}^m)' = m f_{l,r}^{m-1} \left( f_{l,r} + \frac{x^{l-1}}{(l-1)!} - \frac{x^{r-1}}{(r-1)!} \right)$, so the coefficients of $f_{l,r}^0, \ldots, f_{l,r}^n$ up to $[x^k]$ can be computed in $O(nk)$ time. Using this to compute each $r$ is too slow, so we cut somewhere to use $p_r = \frac{1}{n^k} [\frac{x^k}{k!}] \sum_{0\le m\le n} f_{0,l}^{n-m} f_{l,r}^m$.

Fix a block size $B$. For each $0 \le u \le k/B$ we compute $f_{0,uB}^m$ for each $0 \le m \le n$. This takes $O(nk^2/B)$ time. For $r = uB + v$, we compute $f_{uB,uB+v}^m$ for each $0 \le m \le n$. We only need their non-zero terms, so we can do this in $O(\sum_{0\le m\le n} mB) = O(n^2B)$ time. There is another bound when $uB$ is large: We only need $0 \le m \le k/(uB)$ so we can do this also in $O(k^2/(u^2B))$ time. The sum is $\sum_{0\le u\le k/B} \sum_{0\le v< B} \min\{ n^2B, k^2/(u^2B) \} = O(nkB)$. Select $B = \Theta(k^{1/2})$ to balance, overall $O(n k^{3/2})$ time. We need to be careful of constant factors.

5469 不问天

Apply inclusion-exclusion on the edges originating from $G$. We define $A(x) = \sum_{i\ge0} A_i x^i$ and $B(x) = \sum_{i\ge0} B_i x^i$ so that $A_i$ is the number of (not necessarily perfect) matchings of the complete graph $K_i$, and $B_i$ is $(-1)^{i/2}$ times the number of matchings in $G$ of size $i/2$. Letting $f_{k,i} = [x^i] A B^k$, the answers are $f_{k,kn}$ for $k=1,\ldots,l$.

Finding $B$ is a problem I set without noticing this problem, but after studying that problem and that blog... Assume $n$ is even (add an isolated vertex if $n$ is odd). For a matching in $G$, if we add $n/2$ edges $(1,2),(3,4),\ldots,(n-1,n)$ to the graph, the edges form cycles and paths, and the number of paths is $n/2$ minus the size of the matching. We can count such cycles and paths for each vertex set by DP in $O(2^{n/2} n^2)$ time. Denoting them by $C, P$ as set power series, what we want is $[x^{\text{whole set}}] C P^i$ for each $i$, so this is the power projection problem. Its transposition is composition, thus it can be done in $O(2^{n/2} n^2)$ time.

The special property of $A$ we use is its D-finiteness. From the recurrence $A_i = A_{i-1} + (i-1) A_{i-2}$, we can derive $A = 1 + x A + x^2 A + x^3 A'$. Applying it to $(A B^k)' = A' B^k + k A B^{k-1} B'$, we have $x^3 (A B^k)' = (A - 1 - x A - x^2 A) B^k + k x^3 A B^{k-1} B'$. Taking $[x^i]$, we get $(i-2) f_{k,i-2} = f_{k,i} - ([x^i] B^k) - f_{k,i-1} - f_{k,i-2} + k \sum_{1\le j\le n} f_{k-1,i-2-j} (j B_j)$. This recurrence is nice in that it writes $f_{k,i}$ without $f_{k-1,i}$. On the other hand, from $A B^k = (A B^{k-1}) B$ we get $f_{k,i} = \sum_{0\le j\le n} f_{k-1,i-j} B_j$ (Note that $B_0 = 1$). These two recurrences allow us to maintain a $2 \times O(n)$ window in the table of $f_{k,i}$: If we have $f_{k-1,i}$ for $(k-1)n \le i \le kn+1$, we can compute $f_{k,kn}, f_{k,kn+1}$ and then $f_{k,kn+2}, f_{k-1,kn+2}, f_{k,kn+3}, f_{k-1,kn+3}, \ldots$ in order, we do so until we get $f_{k,i}$ for $kn \le i \le (k+1)n+1$. We can see that coefficients $[x^i] B^k$ needed in this process are zero as $i > kn$. This runs in $O(ln^2)$ time.

5468 托卡马克

All $(2n-1)!!$ perfect matchings on $2n$ points occur with the same probability. We can instead count valid bracket ($\pm1$) sequences weighted by the product of prefix sums before each $-1$. The condition becomes that the prefix sums must be $< k$.

The generating function is $f_k := \frac{1}{1-1x\cdot\frac{1}{1-2x\cdot\frac{1}{\ddots\frac{1}{1-kx \cdot0}}}}$. We can write it as $f_k = \frac{p_k}{q_k}$ where $\begin{bmatrix} p_k\\q_k \end{bmatrix} = \begin{bmatrix}0&1\\-1x&1\end{bmatrix} \begin{bmatrix}0&1\\-2x&1\end{bmatrix} \cdots \begin{bmatrix}0&1\\-kx&1\end{bmatrix} \begin{bmatrix}0\\1\end{bmatrix}$. From this we get the recurrence $q_k = q_{k-1} - (k-1)x q_{k-2}$, with initial conditions $q_0 = q_1 = 1$. We can notice that this is exactly the recurrence for the generating function counting (not necessarily perfect) matchings on $k$ points weighted by $(-x)^{\mathrm{size}}$. Thus we can compute the coefficients of $q_k$ in $O(k)$ time. In order to compute $p_k$ in $O(k \log(k))$ time, we can use $\deg(p_k) = \lfloor (k-1)/2 \rfloor$ and trivial prefix of $f_k$ (In small cases all matchings satisfy the condition). Finally compute $\frac{p_k}{q_k}$ in $O(N \log(N))$ time.
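
A quick check of the recurrence $q_k = q_{k-1} - (k-1)x q_{k-2}$ against the matching-counting interpretation (a toy sketch storing full coefficient lists, not the intended $O(k)$-space implementation):

```python
# Coefficient list of q_k from q_k = q_{k-1} - (k-1) x q_{k-2}, q_0 = q_1 = 1;
# [x^s] q_k is (-1)^s times the number of size-s matchings on k points.
def q_poly(k):
    a, b = [1], [1]                      # q_0, q_1
    for i in range(2, k + 1):
        c = b + [0] * (len(a) + 1 - len(b))
        for d, co in enumerate(a):
            c[d + 1] -= (i - 1) * co
        a, b = b, c
    return b if k >= 1 else a
```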

1475 矩阵树

We use the multivariate Lagrange inversion of the following form. We use boldface for $k$-tuples like $\mathbf{x}^{\mathbf{n}} = x_1^{n_1} \cdots x_k^{n_k}$. Let $K$ be a field. Assume $g_i(\mathbf{t}) \in K[[\mathbf{t}]]$ ($i=1,\ldots,k$) satisfy $g_i(0) \ne 0$, and $f_i(\mathbf{x}) \in K[[\mathbf{x}]]$ ($i=1,\ldots,k$) satisfy $f_i(\mathbf{x}) = x_i g_i(\mathbf{f}(\mathbf{x}))$. Then for $h(\mathbf{t}) \in K((\mathbf{t}))$, it holds that $[\mathbf{x}^{\mathbf{n}}] h(\mathbf{f}(\mathbf{x})) = [\mathbf{t}^{\mathbf{n}}] h(\mathbf{t}) \mathbf{g}(\mathbf{t})^{\mathbf{n}} \det\left( [i=j] - \frac{t_j}{g_i(\mathbf{t})} \frac{\partial g_i}{\partial t_j}(\mathbf{t}) \right)_{i,j}$.

Let $f_i(\mathbf{x})$ be the EGF ($[\frac{\mathbf{x}^{\mathbf{n}}}{\mathbf{n}!}] = [\frac{x_1^{n_1}\cdots x_k^{n_k}}{n_1!\cdots n_k!}]$ is the number of ways) for a rooted tree where the root is in part $i$. We have $f_i(\mathbf{x}) = x_i \exp\left( \sum_{j=1}^k a_{i,j} f_j(\mathbf{x}) \right)$. We mark a vertex in part $1$ as the root and divide by $n_1$ afterward, so the answer is $\frac{1}{n_1} [\frac{\mathbf{x}^{\mathbf{n}}}{\mathbf{n}!}] f_1(\mathbf{x})$. Applying the theorem, it becomes $[\frac{\mathbf{t}^{\mathbf{n}}}{\mathbf{n}!}] \frac{t_1}{n_1} \exp\left( \sum_{i=1}^k \sum_{j=1}^k n_i a_{i,j} t_j \right) \det\left( [i=j] - a_{i,j} t_j \right)_{i,j}$. This determinant can take each $t_j$ only in column $j$, so we can write the answer as $\det\left( [i=j] s_{j,0} - a_{i,j} s_{j,1} \right)_{i,j}$ where $s_{j,l} = [\frac{t_j^{n_j-[j=1]}}{(n_j-[j=1])!}] \exp\left( \sum_{i=1}^k n_i a_{i,j} t_j \right) t_j^l$. Computing $s_{j,0}, s_{j,1}$ is easy, then we compute the determinant in $O(k^3)$ time.

414 一般难度的推式子题

The problem is equivalent to considering a valid bracket sequence of length $2(n-1)$ and its maximum depth. We count the number of sequences with depth $< h$ for each $h$. By the reflection method, we can write it with $O(n/h)$ binomial coefficients. $O(n \log(n))$ time overall.

1640 丧钟为谁而鸣

The EGF is $B(x) := \e^{\e^x-1}$. Using $B'(x) = B(x) \e^x$ repeatedly, we get $\frac{\d^m}{\d x^m} B(x) = \sum_{i=0}^m S(m,i) B(x) \e^{ix}$ ($S$ denotes the Stirling number of the second kind). Its combinatorial meaning is to fix $m$ elements and casework how they are partitioned. Inverting it, we get $B(x) \e^{mx} = \sum_{i=0}^m s(m,i) \frac{\d^i}{\d x^i} B(x)$ ($s$ denotes the signed Stirling number of the first kind). Taking $[\frac{x^n}{n!}]$, we get $\sum_{j=0}^n \binom{n}{j} B_{n-j} m^j = \sum_{0\le i\le m} s(m,i) B_{n+i}$.

We can use the identity with $m = p$ to compute $B_{n+p}$ from $B_{n-k},\ldots,B_{n+p-1}$, modulo $p^k$. We cannot get $B_0, \ldots, B_{p-1}$ in this way, so use the identity with $m = 1$ to compute $B_{n+1}$ from $B_0,\ldots,B_n$. This runs in $O(p^2 + N(p+k))$ time.
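
As a sanity check, the $m = 1$ case of the identity degenerates to the classical recurrence $B_{n+1} = \sum_j \binom{n}{j} B_{n-j}$ (only $s(1,1) = 1$ survives on the right-hand side). A minimal sketch:

```python
# Bell numbers from the m = 1 case of the identity: only s(1,1) = 1 survives,
# giving the classical B_{n+1} = sum_j C(n,j) B_{n-j}.
def bell(n_max, mod):
    B = [1]
    for n in range(n_max):
        binom = 1                        # C(n, j), updated as j grows
        s = 0
        for j in range(n + 1):
            s += binom * B[n - j]
            binom = binom * (n - j) // (j + 1)
        B.append(s % mod)
    return B
```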

1560 骄傲与傲慢

Let $m = 1 + \sum_{i=1}^k b_i$ and $C(y) = y^{-1} + \sum_{i=1}^k b_i y^{a_i}$.

What we want to count is the number of pairs $(S, T)$ of sequences of moves (whose lengths sum to $n$) such that $S$ takes us from $0$ to some $j$, reaching $j$ for the first time at the end. The OGF for $T$ is $\frac{1}{1-mx}$. Any sequence of moves is uniquely split into $(S, U)$ such that $S$ has the same condition as above and $U$ takes us from $0$ to $0$, so the OGF for $S$ is $\frac{1}{(1-mx)Q(x)}$ where $Q(x)$ is the OGF for $U$. So our goal is to compute $[x^n] \frac{1}{(1-mx)^2 Q(x)}$.

We have $[x^n] Q(x) = [y^0] C(y)^n$. In order to use Lagrange inversion, let $D(y) = C(y)^{-1} = y + O(y^2)$ and $E = D^{\langle-1\rangle}$. Then $[x^n] Q(x) = [y^0] D(y)^{-n} = [x^0] x^{-n} (x/E(x))^{0+1} E'(x) = [x^n] x E'(x)/E(x)$, so $Q(x) = x E'(x)/E(x)$.

The answer is $[x^n] \frac{E(x)}{(1-mx)^2 x E'(x)}$. In order to use Lagrange inversion again, we use $D'(E(x)) E'(x) = \frac{\d}{\d x} D(E(x)) = 1$ to write it as $[x^n] F(E(x))$ with $F(y) = \frac{y D'(y)}{(1-mD(y))^2 D(y)}$. We have $[x^n] F(E(x)) = [y^n] F(y) (y/D(y))^{n+1} D'(y)$. Use $D'(y) = -C(y)^{-2} C'(y)$ to write it in $C$ and get $[y^n] \frac{(yC(y))^n (y^2C'(y))^2}{(yC(y)-my)^2}$. We can compute $(yC(y))^n$ in $O(nk)$ time then multiply it by sparse series in $O(nk)$ time.
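
Computing $(yC(y))^n$ in $O(nk)$ time can use the standard ODE trick: for $g = f^n$ we have $g' f = n f' g$, giving a recurrence with one term per nonzero coefficient of $f$. A sketch for an ordinary sparse polynomial (the Laurent-shift bookkeeping for $C$ is omitted; the function name is mine):

```python
# g = f^n via the ODE g' f = n f' g, i.e.
# f_0 * i * g_i = sum_{j >= 1} ((n+1) j - i) * f_j * g_{i-j},
# O(#terms) work per coefficient. `terms` maps exponent -> coefficient,
# and must contain exponent 0 with an invertible coefficient.
def sparse_pow(terms, n, num, mod):
    f0inv = pow(terms[0], mod - 2, mod)
    g = [pow(terms[0], n, mod)] + [0] * (num - 1)
    for i in range(1, num):
        s = 0
        for j, fj in terms.items():
            if 1 <= j <= i:
                s += ((n + 1) * j - i) * fj * g[i - j]
        g[i] = s % mod * f0inv % mod * pow(i, mod - 2, mod) % mod
    return g
```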

145 二十

As a standard orbit-counting, we want to compute for each element of the rotation group of the icosahedron the number of paintings fixed by that rotation. The rotations and the orbits of small triangles are:

  • Identity ($1$ rotation): $20n^2$ orbits of size $1$.
  • $\pm1/5$ or $\pm2/5$ rotation around an axis passing through two vertices ($24$ rotations): $4n^2$ orbits of size $5$.
  • $1/2$ rotation around an axis passing through two edge midpoints ($15$ rotations): $10n^2$ orbits of size $10$.
  • $\pm1/3$ rotation around an axis passing through two face centers ($20$ rotations): $\lfloor 20n^2/3 \rfloor$ orbits of size $3$ and $(20n^2 \bmod 3)$ orbits of size $1$.

Note that for the $\pm1/3$ rotation case the colors of the fixed points are determined by $a_i \bmod 3$. The number of assignments of colors to the orbits is a multinomial coefficient: Use an $O(\sqrt{20n^2} \log(20n^2))$ time method or local precomputation for large factorials. After fixing it we can write the OGF counting the price by $x$ as the product of $O(k)$ polynomials of the form $\left( \sum_{0\le j\le d} x^{ej} \right)^c$ for some $(c,d,e)$. So we can write it as $f = \prod_i (1-x^{c_i})^{d_i}$. We use $f' = f \sum_i -c_i d_i \frac{x^{c_i-1}}{1-x^{c_i}}$, and we can compute $f$ and the $\frac{f}{1-x^{c_i}}$ in parallel. This takes $O(mk)$ time.
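
The log-derivative recurrence can be sketched as follows, maintaining the auxiliary series $\frac{f}{1-x^{c_i}}$ in parallel (modulo a prime, since we divide by the index; the function name is mine):

```python
# f = prod_i (1 - x^{c_i})^{d_i} (d_i may be negative) from
# f' = f * sum_i d_i * (-c_i x^{c_i - 1}) / (1 - x^{c_i}), maintaining
# h_i = f / (1 - x^{c_i}) via h_i[m] = f[m] + h_i[m - c_i].
def prod_pows(cs, ds, num, mod):
    f = [1] + [0] * (num - 1)
    h = [[1] + [0] * (num - 1) for _ in cs]
    for m in range(1, num):
        s = 0
        for i, (c, d) in enumerate(zip(cs, ds)):
            if m >= c:
                s -= c * d * h[i][m - c]
        f[m] = s % mod * pow(m, mod - 2, mod) % mod
        for i, c in enumerate(cs):
            h[i][m] = (f[m] + (h[i][m - c] if m >= c else 0)) % mod
    return f
```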

1641 拆分整数

For a multiset in question, let $a_i$ be the multiplicity of $2^{-i}$ (so $n = \sum_{i\ge0} a_i$ and $k = \sum_{i\ge0} a_i 2^{-i}$) and define sequence $b$ by $b_0 = k/2$ and $b_{i+1} = 2 b_i - a_i$. Its combinatorial meaning is that $b_{i+1}$ is the remaining sum to make after using $2^0, 2^{-1}, \ldots, 2^{-i}$, in units of $2^{-i}$. For some $m$ we have $b_1,\ldots,b_{m-1} > 0$ and $b_m = b_{m+1} = \cdots = 0$. We note that $n = \sum_{i\ge0} (2 b_i - b_{i+1}) = 2 b_0 - \sum_{i\ge1} b_i$. Checking the inverse correspondence, we can rephrase the problem into counting $(b_1, \ldots, b_{m-1})$ for some $m \ge 1$ satisfying the following conditions:

  • $b_i$ is a positive integer.
  • $k \ge b_1$.
  • $2 b_i \ge b_{i+1}$ for $i = 1, \ldots, m-2$.
  • $n-k = \sum_{1\le i\le m-1} b_i$.

Apply inclusion-exclusion for $2 b_i \ge b_{i+1}$: This is tempting because the chain of $2 b_i < b_{i+1}$ cannot be long. For a chain $(c_1, \ldots, c_l)$ such that $2 c_i < c_{i+1}$, if we let $d_i = c_i - 2 c_{i-1} \ge 1$ (where $c_0 = 0$) then $\sum_{1\le i\le l} c_i = \sum_{1\le i\le l} (2^{l-i+1}-1) d_i$. So its generating function, counting the sum by $x$, is $f_l := \prod_{1\le j\le l} \dfrac{x^{2^j-1}}{1-x^{2^j-1}}$. Hence the generating function for $b$ without considering $k \ge b_1$ is $g := \dfrac{1}{\sum_{l\ge0} (-1)^l f_l}$ where $f_0 := 1$.

For $k \ge b_1$ it is also clean to apply inclusion-exclusion. If we impose $k < c_1$ on the first chain $(c_1, \ldots, c_l)$, we only need to modify $d_1 \ge 1$ to $d_1 \ge k+1$. Therefore the answer is $[x^{n-k}] \sum_{l\ge0} (-1)^l x^{k(2^l-1)} f_l g$.

Now the problem is computing $f_l g$ for $l \le \log_2(N) + O(1)$. An easy $O(N \log(N))$ time solution is to first compute $f_0, f_1, \ldots$ in order ($O(N)$ time each), then $g$ ($O(N \log(N))$ time with FFT), finally $f_0 g, f_1 g, \ldots$ in order ($O(N)$ time each). An alternative $O(N \log(N))$ time solution without FFT is to compute all the $f_l g$ in parallel: For each $i=1,\ldots,N$, first compute $[x^i] f_l g$ from previous terms and then $[x^i] g$ using $g = -\sum_{l\ge1} (-1)^l f_l g$.

907 万花筒

Follow Kruskal's algorithm. After checking all edges $(u,v)$ such that $((v-u) \bmod n) \in D$, the connected components are the residue classes modulo $\gcd(n, \gcd(D))$.

908 哈密顿环

Fix two vertices $s, t$. We can regard a Hamiltonian cycle as an ordered pair of $s$-$t$ paths, so that the first path has smaller bitwise representation of its vertex set. The $s$-$t$ paths can be counted by usual DP, with $O(2^n n^2)$ multiplications.
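
The "usual DP" for $s$-$t$ paths can be sketched as a plain bitmask DP over an adjacency matrix (this counts the paths covering all vertices; the per-subset bookkeeping and the pairing of the two paths are omitted):

```python
# dp[mask][v] = number of simple paths from s to v visiting exactly `mask`;
# O(2^n n^2) additions over an adjacency matrix.
def count_st_paths(adj, s, t):
    n = len(adj)
    dp = [[0] * n for _ in range(1 << n)]
    dp[1 << s][s] = 1
    for mask in range(1 << n):
        for v in range(n):
            if dp[mask][v]:
                for u in range(n):
                    if adj[v][u] and not (mask >> u) & 1:
                        dp[mask | (1 << u)][u] += dp[mask][v]
    return dp[(1 << n) - 1][t]
```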

909 数点

Calculate the contribution of each set of $\le 3$ points. The cases for $\le 2$ points are easier. For $3$ points, let $i_0 < i_1 < i_2$ and casework on the order of $p_{i_0}, p_{i_1}, p_{i_2}$. By symmetry, there are essentially $2$ cases:

  • $p_{i_0} < p_{i_1} < p_{i_2}$: Sweep $i$ and compute contribution of each $i_1$ using BIT.
  • $p_{i_0} < p_{i_2} < p_{i_1}$: Sweep $i$ in decreasing order. Maintain a segment tree indexed by $p_{i_2}$. The effect of point $(i_1, p_{i_1})$ can be seen as multiplying a polynomial $1+ax$ modulo $x^2$ to range $[1, p_{i_1})$.

$O(n \log(n))$ time.

910 精shèn细腻

We can reduce $m$ to $[0, \log_2(M) + M \varphi(M))$ using periodicity. Below we omit the time complexity of factorizing $M$ and reducing $m$.

A naive $O(n \log(M))$ solution is binary exponentiation maintaining the pair $(k^i, k^0 + k^1 + \cdots + k^{i-1})$. It seems difficult to pass, but it is not too slow either, so constant factors are quite important for this problem.

Say we want to compute modulo a prime power $p^e$. The easier case is when $k \not\equiv 1 \pmod{p}$: Use $k^0 + k^1 + \cdots + k^{m-1} = \frac{k^m-1}{k-1}$. All $k^m$ can be computed in $O(n + n \log_n(M))$ time using prime sieve, and the multiplicative inverses of all $k-1$ can be computed in $O(n + \log(M))$ time using prefix products. Note that when we compute the answer modulo $M$, the $k^m$ part should be done once modulo $M$, and then classify $k$ by $\mathrm{rad}(\gcd(k-1, M))$ so that each $k$ contributes only once for inverse computation. $O(n + n \log_n(M) + 2^{\omega(M)} \log(M))$ time overall for this case.
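
The prefix-product trick for batch inversion (one `pow` call for the whole batch) can be sketched as:

```python
# Invert a_1, ..., a_n modulo a prime with a single pow() call.
def batch_inverse(a, mod):
    n = len(a)
    pre = [1] * (n + 1)
    for i, x in enumerate(a):
        pre[i + 1] = pre[i] * x % mod
    inv = pow(pre[n], mod - 2, mod)          # (a_1 ... a_n)^(-1)
    res = [0] * n
    for i in range(n - 1, -1, -1):
        res[i] = pre[i] * inv % mod          # strip the suffix product
        inv = inv * a[i] % mod
    return res
```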

The harder case is when $k \equiv 1 \pmod{p}$. Letting $k = 1 + x p$, we have $k^0 + k^1 + \cdots + k^{m-1} = \sum_{i\ge1} \binom{m}{i} x^{i-1} p^{i-1}$, and the terms with $i > e$ vanish modulo $p^e$. Precompute all $\binom{m}{i}$ by tracking $v_p(*)$, then the computation for each $n$ becomes very simple and fast. $O(\sum_{(p,e)} (e \log{M} + (n/p) e))$ time. Specializing $p = 2$ to use unsigned arithmetic makes the constant factor even better.

More optimization is possible for larger $e$ by another layer of $p$-adic expansion. Let $q = p^{\lceil e/2 \rceil}$ and write $x = y + z q$. Since $q^2$ vanishes, we have $x^i \equiv y^i + i y^{i-1} z q \pmod{p^e}$. If we precompute the sums for each of the terms $y^i$ and $i y^{i-1} q$, for each $y$ (there are $\min\{q, n/p\}$ of them), we can answer each query in $O(1)$ time. $O(\sum_{(p,e)} (e \log{M} + \min\{q, n/p\} \cdot e + n))$ time overall for this case.

911 答案补全

Compute modulo $10^9+7$.

912 对角线

Inclusion-exclusion on the conditions that each row has at least $2$ kinds.

913 蝴蝶图

Let $E_L$ be the set of edges with both endpoints in $L$ but not both in $L \cap R$. Build a minimum spanning forest $F_L$ of $(L, E_L)$. Edges in $E_L$ but not in $F_L$ are useless no matter how edges within $L \cap R$ are used. Most of the edges in $F_L$ will be used: We consider compressing this forest by $L \cap R$, that is, the virtual tree $F'_L$ (possibly disconnected). Edges in $F_L$ but not in $F'_L$ must be used. For an edge in $F'_L$, it corresponds to a path in $F_L$, and its edges except the one with the maximum weight must be used. That maximum weight can be regarded as the cost of the edge in $F'_L$.

Doing the same for $R$, we get a kernel of $O(|L \cap R|)$ size. Try all partitions of $L \cap R$ into the connected components formed by the selected edges within $L \cap R$, then compute the costs for $L$ and $R$ independently. $O(B_{|L \cap R|} \mathrm{poly}(|L \cap R|))$ time ($B$ denotes the Bell number, $B_{11} = 678570$).

914 分身术

The problem can be formulated as a minimum cost flow on the network with the following edges:

  • $x \to x+1$, capacity $\infty$, cost $0$
  • $l_i \to r_i$, capacity $1$, cost $-1$

As a tie-breaker, assume that $l_1,r_1,\ldots,l_n,r_n$ are distinct and modify the cost of the edge $l_i \to r_i$ to be $-(1 + \varepsilon^{r_i})$, so that we have a unique optimal flow $f_k$ for each $k$, in favor of using intervals with small $r_i$.

Consider the residual graph of $f_k$. We claim that its shortest path does not contain any positive edge (pushing back $r_i \to l_i$). Assume the contrary and let $r \to l$ be the last positive edge on the shortest path. Below we denote path following $x \leftrightarrow x+1$ with cost $0$ by $\Rightarrow$.

  • If there is a negative edge after $r \to l$, let $l' \to r'$ be the first such edge.
    • If $r < r'$, we could replace $r \to l \Rightarrow l' \to r'$ with $r \Rightarrow r'$ in the shortest path, contradiction.
    • If $r' < r$, there is a negative cycle $r \to l \Rightarrow l' \to r' \Rightarrow r$ in the residual graph, contradiction.
  • If there is no negative edge after $r \to l$, we could remove $r \to l \Rightarrow r$ in the shortest path, contradiction.

Therefore the optimal set of used intervals increases monotonically in $k$ with this tie-breaker. Now an exchange argument similar to the standard greedy solution of interval scheduling ($k = 1$) works: When we find $f_k$, we repeatedly select the interval with the smallest $r_i$ which does not cause an overlap of $> k$ intervals. We want the rightmost position $x$ where exactly $k$ intervals overlap, which can be maintained with a segment tree. Getting the smallest $r_i$ with $x < l_i$ can also be done with a segment tree. $O(n \log(n))$ time.

4047 山河重整

Let $T$ be a set of positive integers. Let $T$'s elements be $t_0 < t_1 < \cdots < t_{k-1}$ and put $t_k = \infty$. If we take minimum $j$ such that $t_0 + \cdots + t_{j-1} + 1 < t_j$, then we can see that $t_0 + \cdots + t_{j-1} + 1$ is the minimum non-negative integer which is not a subset sum of $T$.

Let $a_s$ be the number of such prefixes with sum $s$, or precisely, the number of sets of positive integers such that the sum is $s$ and every integer in $[0, s]$ is a subset sum. The answer is $2^n - \sum_{0\le s< n} a_s 2^{n-s-1}$. Classifying sets of positive integers by their minimum non-subset-sum, we have $\prod_{i\ge1} (1+x^i) = \sum_{s\ge0} a_s x^s \prod_{i\ge s+2} (1+x^i)$.

When we have $a_0,\ldots,a_{m-1}$, we can calculate $a_m,\ldots,a_{2m-1}$ by $\sum_{m\le s< 2m} a_s x^s \equiv \prod_{i\ge1} (1+x^i) - \sum_{0\le s< m} a_s x^s \prod_{i\ge s+2} (1+x^i) \pmod{x^{2m}}$. We use a partition identity $\prod_{i\ge r} (1+x^i) = \sum_{k\ge0} \dfrac{x^{kr+k(k-1)/2}}{(1-x)(1-x^2)\cdots(1-x^k)}$: Combinatorially, this classifies partitions by the number $k$ of parts, then subtracts $r,r+1,\ldots,r+k-1$ from the parts and considers the transposition of Young diagrams. We can go from larger $k$ to smaller, each time dividing by $1-x^k$ and adding something. This runs in $O(m^{3/2})$ time, so overall it is $O(n^{3/2})$ time.
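
A numeric check of the partition identity, truncated to degree $< D$ (the parameters are arbitrary; both sides count partitions into distinct parts $\ge r$):

```python
# Check prod_{i >= r} (1 + x^i)
#   = sum_{k >= 0} x^{kr + k(k-1)/2} / ((1-x)(1-x^2)...(1-x^k))
# truncated to degree < D.
D, r = 40, 3
lhs = [1] + [0] * (D - 1)
for i in range(r, D):
    for d in range(D - 1 - i, -1, -1):   # multiply by 1 + x^i
        lhs[d + i] += lhs[d]
rhs = [0] * D
term = [1] + [0] * (D - 1)               # 1 / ((1-x)...(1-x^k)), k = 0 so far
k, shift = 0, 0
while shift < D:
    for d in range(D - shift):           # add x^shift * term
        rhs[d + shift] += term[d]
    k += 1
    for d in range(k, D):                # multiply term by 1 / (1 - x^k)
        term[d] += term[d - k]
    shift = k * r + k * (k - 1) // 2
assert lhs == rhs
```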

4060 多边形

First assume that there are $b_i$ edges on the $i$-th edge of the $n$-gon (so $(b_i-1)$ out of the $(a_i-1)$ inside points are actually used). This contributes to the answer with weight $\prod_{i=1}^n \binom{a_i}{b_i}$. We slightly inflate the polygon and count the triangulations of the $(\sum_{i=1}^n b_i)$-gon with the conditions that some pairs of points must not be connected. Apply inclusion-exclusion on those $\sum_{i=1}^n \binom{b_i}{2}$ conditions. Assume that after we force some pairs to be connected the "inner" polygon is a $(\sum_{i=1}^n m_i)$-gon, and the $b_i$ edges are split into $c_{i,1},\ldots,c_{i,m_i}$ in order ($b_i = \sum_{j=1}^{m_i} c_{i,j}$). We fix the $c_{i,j}$ then count the number of ways to force with weight $-1$ for each forced connection and triangulate everything except the inner polygon. Such a count should be the product of:

  • $c_{i,j} = 1$ means that the edge of the inner polygon is an outermost edge. The total weight is $+1$.
  • $c_{i,j} \ge 2$ means a $(c_{i,j} + 1)$-gon is attached outside. The total weight is $-[c_{i,j}=2]$ because every time we fix a triangulation of this $(c_{i,j} + 1)$-gon, each of its edges contributes $(1 + (-1))$.

Thus the total after fixing the $c_{i,j}$ is $C_{(\sum_{i=1}^n m_i) - 2} \prod_{i=1}^n \prod_{j=1}^{m_i} [x^{c_{i,j}}] (x-x^2)$ ($C_k$ denotes the Catalan number), and the total after only fixing the $b_i$ is $T\left( \prod_{i=1}^n [x^{b_i}] \frac{1}{1-(x-x^2)y} \right)$ where $T$ is a linear operator such that $y^k \mapsto C_{k-2}$. Therefore the final answer is $T\left( \prod_{i=1}^n [x^{a_i}] \frac{(1+x)^{a_i}}{1-(x-x^2)y} \right)$.

Computing the coefficients of $[x^{a_i}] \frac{(1+x)^{a_i}}{1-(x-x^2)y}$ is exactly the power projection problem, and can be done with Lagrange inversion in $O(a_i \log(a_i))$ time, as we can get the compositional inverse of $x-x^2$ explicitly. Multiplying the resulting $n$ polynomials is the bottleneck, $O(A \log(A) \log(n))$ time where $A = \sum_{i=1}^n a_i$.

8956 欧拉?欧拉!

Inclusion-exclusion on descents. We want to count $(i, p)$ such that $0 = i_0 < i_1 < \cdots < i_a = n$ and $p_{i_{b-1}+1} > p_{i_{b-1}+2} > \cdots > p_{i_b}$ ($1 \le b \le a$), for each $1 \le a \le n$ and for each excedance number. Each block $p(i_{b-1}, i_b]$ is uniquely cut into a prefix and a suffix so that the prefix consists of excedances and the suffix of non-excedances, so it is equivalent to count $(i, j, p)$ such that $0 = i_0 < i_1 < \cdots < i_a = n$, and $i_{b-1} \le j_b \le i_b$ and $p_{i_{b-1}+1} > p_{i_{b-1}+2} > \cdots > p_{j_b} \ge j_b + 1 \ge p_{j_b+1} > p_{j_b+2} > \cdots > p_{i_b}$ ($1 \le b \le a$).

Now we can run DP in $O(\mathrm{poly}(n))$ time, making the matching of $i$ and $p_i$ from the smaller. Careful $O(n^6)$ can pass. Below we seek faster solutions.

Note that $(j, p)$ uniquely determines $i$: In a block $p(j_b, j_{b+1}]$, the values should be in $(0, j_b + 1]$ or $(j_{b+1}, n]$, so if $j_b < j_{b+1}$, the block is uniquely cut into a prefix and a suffix. If $j_b = j_{b+1}$, we know $j_b = i_b = j_{b+1}$.

We can relax $i_{b-1} < i_b$ to $i_{b-1} \le i_b$ by inclusion-exclusion. So we want to count $(j, p)$ with weight $\frac{1}{(j_1-0)! (j_2-j_1)! (j_3-j_2)! \cdots (j_a-j_{a-1})! (n-j_a)!}$, such that $0 \le j_1 \le j_2 \le \cdots \le j_a \le n$ and:

  • Block $0$: In $p(0, j_1]$, values are in $(j_1, n]$ (excedances).
  • Block $b$ ($1 \le b \le a-1$): In $p(j_b, j_{b+1}]$, values are in $(0, j_b + 1]$ (non-excedances) or $(j_{b+1}, n]$ (excedances).
  • Block $a$: In $p(j_a, n]$, values are in $(0, j_a + 1]$ (non-excedances).


We further apply inclusion-exclusion on banned elements and excedances. We first select $j$, then select some banned elements, so the excedances can be selected in a Young diagram. Letting $c_b$ be the length of block $b$ and $d_b$ be the number of banned elements selected in block $b$, the Young diagram has $(c_b - d_b)$ rows of length $\sum_{b< b'\le a} (c_{b'} - d_{b'})$ for each $0 \le b \le a$. Here we use a theory for rook polynomials. Let $l_1 \le l_2 \le \cdots \le l_m$ be the lengths of rows of a Young diagram, and let $r_k$ be the number of ways to place $k$ non-attacking rooks in it. Then we have $\sum_{0\le k\le m} r_k z^{\underline{m-k}} = \prod_{1\le i\le m} (z + l_i - i + 1)$. In our case the right-hand side becomes $\prod_{0\le b\le a} z^{\underline{c_b-d_b}}$. Finally we select some excedances ("place some rooks") and then arbitrary permutations on remaining indices and values.

What we want is the sum of those polynomials over all $j$ and the ways to select banned elements, for each $m = \sum_{0\le b\le a} (c_b-d_b)$. We can write it as $[x^n y^m] F(x,y,z) G(x,y,z)^a$ ($F$ for block $0$, and $G$ for blocks $1,\ldots,a-1,a$). We can interpolate $y$ and $z$ and then solve the power projection problem. Overall this is $O(n^3 \log(n)^2)$ time with FFT. Without FFT, square root decomposition for power projection achieves $O(n^{4.5})$ time.

1930 Koishi's Unconscious Permutation

Some parts are similar to [1928 Slime and Sequences].

It easily reduces to the case $s=0$. Apply inclusion-exclusion on the conditions $[p_i + 1 = p_{i+1}]$. Counting descents by $y$, we want $\mathrm{ans}(y) = \sum_{0\le i\le n-1} \binom{n-1}{i} (-1)^i \sum_{0\le k\le n-1} A(n-i,k) y^k$, where $A$ denotes the Eulerian numbers. Using $\sum_{n,k\ge0} A(n,k) \frac{x^n}{n!} y^k = \frac{1-y}{1-y\e^{x(1-y)}}$, we have $\mathrm{ans}(y) = (n-1)! [x^{n-1}] \e^{-x} \left(\frac{\d}{\d x} A(x,y)\right) = (n-1)! y (1-y)^2 [x^{n-1}] \frac{\e^{-xy}}{(1-y\e^{x(1-y)})^2}$.

Substitute $z = x(1-y)$, then $\mathrm{ans}(y) = (n-1)! y (1-y)^{n+1} [z^{n-1}] \dfrac{\e^{\frac{-zy}{1-y}}}{(1-y\e^z)^2}$. Use Lagrange inversion with $t = \e^z - 1$, then $\mathrm{ans}(y) = (n-1)! y (1-y)^{n+1} [t^{n-1}] \dfrac{(1+t)^{\frac{-y}{1-y}}}{(1-y(1+t))^2} B(t)$ where $B(t) = \left(\frac{t}{\log(1+t)}\right)^n \frac{1}{1+t}$. Alternatively, we can just set $t = \e^z - 1$ so that we can take modulo $t^n$, and compute Stirling numbers of the second kind to get the equivalent formula.

Noticing $\frac{1}{(1-y(1+t))^2} = \sum_{i\ge0} (i+1) \frac{y^i}{(1-y)^{i+2}} t^i$, we set $w = \frac{y}{1-y}$, then $\mathrm{ans}(y) = (n-1)! y (1-y)^{n-1} [t^{n-1}] \left(\sum_{i\ge0} \binom{-w}{i} t^i\right) \left(\sum_{i\ge0} (i+1) w^i t^i\right) B(t)$.

Let $C_i = \sum_{0\le j\le i} \binom{-w}{i-j} w^j$ and $D_i = \sum_{0\le j\le i} \binom{-w}{i-j} (j+1) w^j$. We have recurrences $\binom{-w}{i} = \binom{-w}{i-1} \frac{-w-(i-1)}{i}, C_i = w C_{i-1} + \binom{-w}{i}, D_i = w D_{i-1} + C_i$. Thus $\sum_i D_i [t^{n-1-i}] B(t)$ can be represented by a product of matrices whose coefficients are polynomial in $w$ of degree $\le 1$. Divide and conquer achieves $O(n \log(n)^2)$ time.

Finally replace $(1-y)^{n-1} w^i$ with $y^i (1-y)^{n-1-i}$. This is a single convolution.

6268 指点江山

Consider $(a_1, \ldots, a_n, -(a_1 + \cdots + a_n))$. The operation swaps the $i$-th and the last elements, so we can permute it arbitrarily.
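A small sanity check of this observation (assumption, since the statement is not restated here: the operation replaces $a_i$ by $-(a_1 + \cdots + a_n)$): on the extended tuple, the operation is exactly the transposition of positions $i$ and $n+1$.

```python
# Sketch (assumption: the operation replaces a_i with -(a_1 + ... + a_n)).
# On the extended tuple (a_1, ..., a_n, -(a_1 + ... + a_n)) this swaps
# positions i and n+1, so any permutation of the tuple is reachable.

def op(a, i):
    """Apply the (assumed) operation at position i (0-indexed)."""
    b = list(a)
    b[i] = -sum(a)
    return b

def extend(a):
    return list(a) + [-sum(a)]

a = [3, -1, 4, 1]
for i in range(len(a)):
    e, e2 = extend(a), extend(op(a, i))
    swapped = list(e)
    swapped[i], swapped[-1] = swapped[-1], swapped[i]
    assert e2 == swapped  # exactly a transposition of i and the last slot
print("ok")
```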

6269 模棱两可

We want to check if there exists an element occurring $\ge k$ times in $[l,r]$, for $O(\log(n))$ values of $k$. Sweep $l$, and for each $k$ maintain the minimum $r$ satisfying it, by locating the $(k-1)$-th next occurrence of $a_l$. $O((n+q) \log(n))$ time.
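A sketch of the sweep for one fixed $k$ (0-indexed; `minr[l]` is the minimum $r$ such that some value occurs $\ge k$ times in `a[l..r]`, with sentinel $n$ if no such $r$ exists):

```python
# Sweep l from right to left; the only new candidate window starting at l
# uses the k-th occurrence of a[l] at or after position l.

from collections import defaultdict

def min_r_for_k(a, k):
    n = len(a)
    pos = defaultdict(list)  # positions of each value, in decreasing order
    minr = [n] * (n + 1)     # minr[n] is a sentinel
    for l in range(n - 1, -1, -1):
        pos[a[l]].append(l)
        occ = pos[a[l]]
        cand = occ[-k] if len(occ) >= k else n  # k-th occurrence from l on
        minr[l] = min(minr[l + 1], cand)
    return minr[:n]

print(min_r_for_k([1, 2, 1, 1, 2], 2))  # -> [2, 3, 3, 5, 5]
```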

6270 选图

Let's paint the vertices in one of the colors $1,\ldots,k$, starting from an arbitrary coloring. As long as there exists a vertex $u$ which is connected to $\ge d$ vertices of $u$'s color, repaint $u$ with the least frequent color among $u$'s neighbors. The condition $\mathrm{deg}(u) < kd$ ensures the least frequent color appears $< d$ times, and this process terminates because the number of monochromatic edges strictly decreases. Finally at least one of the $k$ color classes has $\ge n$ vertices and satisfies the requirement. Maintaining the violating vertices with a queue, we achieve $O(k^3 n d^2)$ time.
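The repainting process in a minimal runnable form (a toy instance, not the full solution; the queue-based bookkeeping is as in the argument above):

```python
# While some vertex u has >= d neighbors of its own color, recolor u with
# a least frequent color among u's neighbors. If deg(u) < k*d for every u,
# that color appears < d times, and the number of monochromatic edges
# strictly decreases, so the process terminates.

from collections import Counter, deque

def stabilize(adj, k, d, color):
    queue = deque(range(len(adj)))
    while queue:
        u = queue.popleft()
        cnt = Counter(color[v] for v in adj[u])
        if cnt[color[u]] >= d:
            color[u] = min(range(k), key=lambda c: cnt[c])
            queue.extend(adj[u])   # neighbors may start violating
            queue.append(u)
    return color

# triangle plus a pendant vertex, k = 2 colors, d = 2 (max degree 3 < k*d)
adj = [[1, 2], [0, 2], [0, 1, 3], [2]]
color = stabilize(adj, 2, 2, [0, 0, 0, 0])
assert all(sum(color[v] == color[u] for v in adj[u]) < 2 for u in range(4))
print(color)
```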

6271 和谐社会

Bottom-up DP with $\sum (w_* - 1) \in [-n,n]$ as a state works in $O(n^2 \log(n))$ time with FFT. We can save a factor of $2$ by limiting to $[-|\mathrm{subtree}(u)|, n - |\mathrm{subtree}(u)|]$.

Top-down DP can find the answer for the case containing root. Specifically, $\mathrm{dfs}(u, f)$ receives a series $f$ and returns $fg$ where $[x^s] g$ is the sum of the probability over the components below $u$ such that $\sum (w_* - 1) = s$. Since we can multiply a series by $\frac{1}{a+1} (x^{-1} + \cdots + x^{a-1})$ in $O(n)$ time, this works in $O(n^2)$ time. By selecting a centroid as the root and recursively solving subproblems, it is $O(n^2)$ time overall.

6272 克莱因瓶

There are two cases: Use $(0,j) \leftrightarrow (n,n-j)$ once or never.

6273 马克思么克斯

It is possible to obtain $O(n)$ tuples $(l_0,l_1,r_0,r_1,b)$ so that $\mathrm{mex}\{a_l,\ldots,a_r\} = b$ for all $l_0 \le l \le l_1$ and $r_0 \le r \le r_1$ and these tuples exactly cover all $1 \le l \le r \le n$. After that it is easy to finish using an erasable priority queue in $O(n \log(n))$ time.

Sweep $l$. We maintain the intervals of $r$ according to the mex of $[l,r]$. When going $l+1 \to l$, one interval is created or extended for $l=r$, and at most one interval (mex being $a_l$) is broken into some intervals. In order to bound the number of tuples, count when an interval is broken (or remains at the end). A value-to-position segment tree can handle these updates in $O(n \log(n))$ time.
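The "erasable priority queue" used in the final step is just two heaps with lazy deletion; a minimal sketch:

```python
# Two-heap lazy-deletion priority queue: push, erase (of a present
# element), and max query, each in amortized O(log n).

import heapq

class ErasableHeap:
    def __init__(self):
        self.h, self.pending = [], []
    def push(self, x):
        heapq.heappush(self.h, -x)          # store negatives for max-heap
    def erase(self, x):
        heapq.heappush(self.pending, -x)    # delete lazily
    def top(self):
        while self.pending and self.h[0] == self.pending[0]:
            heapq.heappop(self.h)
            heapq.heappop(self.pending)
        return -self.h[0]

q = ErasableHeap()
for x in [3, 1, 4, 1, 5]:
    q.push(x)
q.erase(5)
q.erase(4)
print(q.top())  # -> 3
```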

6274 周游世界

A naive solution is to explore the states of $(s, t, \mathrm{weight})$. To a transition (except for some which used $\le 1$ edges) we can assign a tuple of edges $(i,j,k)$ with $ijk \le m$ such that a walk to there starts with $i$ and ends with $j$, and the next edge is $k$. A tuple $(i,j,k)$ is assigned to at most $4$ transitions, so we can bound the number of transitions by $O(m \log(m)^2)$. It is $\Theta(m \log(m))$ for a star and I don't know the worst case (it would be $\Theta(m \log(m)^2)$ if multi-edges were allowed).

We can do DP with $\lfloor \frac{m}{\mathrm{weight}} \rfloor$ as a key. Fix the first edge $i$ to use and its direction. We compute:

  • $f(j,0/1,x)$ - the maximum possible weight when the last edge is $j$ in the designated direction, and $\lfloor \frac{m}{\mathrm{weight}} \rfloor \le x$.
  • $g(u,x)$ - the maximum possible weight at vertex $u$, and $\lfloor \frac{m}{\mathrm{weight}} \rfloor \le x$.

in decreasing order of $x$, updating $g$ in-place. The time complexity is proportional to the number of states for $f$, so $\sqrt{m/i} + \sum_{1\le j\le m/i} \sqrt{m/ij} = O(m/i)$. $O(m \log(m))$ time overall.

6275 大度一点

For each $r \in [0,n]$ and $S \subseteq [n]$, let $\mathcal{G}(r,S)$ be the set of subgraphs within vertex set $S$ such that if $u \in [r] \cap S$ then $\deg(u) \ge 2$. We want to go from $r=0$ to $r=n$: $\#\mathcal{G}(0,S)$ is trivial and the answer is $\#\mathcal{G}(n,S)$. This idea is similar to biconnected graph counting.

We want to count subgraphs in $\mathcal{G}(r,S) - \mathcal{G}(r-1,S)$. There are $2$ cases:

  • $\deg(r) = 0$. By removing $r$, it corresponds to a subgraph in $\mathcal{G}(r-1, S-\{r\})$.
  • $\deg(r) = 1$. By successively removing a leaf with id $\le r$, we obtain the following structure uniquely: $H$-$u_1$-$u_2$-$\cdots$-$u_k$-$r$ ($k \ge 0$, $\{u_1,\ldots,u_k\} \subseteq [r-1]$, $H \in \mathcal{G}(r-1,S-\{u_1,\ldots,u_k\}-\{r\})$). We can count these structures by DP, starting from $H$ and having the last vertex $u_i$ as a key, in $O(2^n r^2)$ time.

$O(2^n n^3)$ time overall, with a relatively small constant factor.

5448 另一个欧拉数问题

Let $f_{i,j}$ be the number of $(i,\alpha)$-sequences such that there are $j-1$ descents (we define $f_{0,j} = [j=0]$). Since all $\alpha$ copies of $i$ must be adjacent, we have $f_{i,j} = (\alpha(i-1)+1-(j-1)) f_{i-1,j-1} + j f_{i-1,j}$. Letting $f_i(x) = \sum_j f_{i,j} x^j$, we have $f_i(x) = \left((\alpha(i-1)+1)x + x(1-x)\frac{\d}{\d x}\right) f_{i-1}(x)$.
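The recurrence in a runnable form (a direct $O(n^2)$ table, just to pin down the indexing; for $\alpha = 1$ it reproduces the Eulerian numbers, shifted so that $f_{i,j}$ counts permutations with $j-1$ descents):

```python
# f_{i,j} = (alpha*(i-1)+1-(j-1)) f_{i-1,j-1} + j f_{i-1,j}, f_{0,j} = [j==0].

def f_table(n, alpha):
    f = [1] + [0] * n                # f[j] = f_{0,j}
    for i in range(1, n + 1):
        g = [0] * (n + 1)
        for j in range(1, n + 1):
            g[j] = (alpha * (i - 1) + 1 - (j - 1)) * f[j - 1] + j * f[j]
        f = g
    return f

print(f_table(4, 1))  # -> [0, 1, 11, 11, 1], the Eulerian numbers A(4, 0..3)
```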

We want a transformation independent of $i$. With a hint from Eulerian numbers, we can define $g_i(x) = (1-x)^{-(\alpha i+1)} f_i(x)$ to get $g_i(x) = x (1-x)^{1-\alpha} \frac{\d}{\d x} g_{i-1}(x)$.

We want to "diagonalize" this transformation: If we have $h_i(y) = y \frac{\d}{\d y} h_{i-1}(y)$ after a change of variables, then it is easy to get $h_n(y)$ from $h_0(y)$. We solve the differential equation $y \frac{\d}{\d y} = x (1-x)^{1-\alpha} \frac{\d}{\d x}$, getting $y = x \exp\left( \int \d x\, \frac{(1-x)^{\alpha-1}-1}{x} \right)$.

It is now easy to finish with composition and compositional inverse in $O(n \log(n)^2)$ time. To achieve $O(n \log(n))$ time, we use Newton method to express $x$ in $y$, so that we can compute $h_0(y)$, and $[x^m] g_n(x)$ by Lagrange inversion.

8329 Excuse

Let $f_i$ be the answer for $i$ coins. Think of the problem as if we toss all coins in parallel to derive $f_i = \sum_{0\le j< i} \frac{\binom{i}{j}}{2^i} (1 + f_j)$. This immediately leads to a fast solution with online convolution in $O(n \log(n)^2)$ time.
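The recurrence evaluated exactly as stated, in exact rationals (a slow $O(n^2)$ sketch for small $n$, not the online-convolution solution):

```python
# f_0 = 0 and f_i = sum_{0 <= j < i} binom(i,j)/2^i * (1 + f_j).

from fractions import Fraction
from math import comb

def excuse(n):
    f = [Fraction(0)]
    for i in range(1, n + 1):
        s = sum(comb(i, j) * (1 + f[j]) for j in range(i))
        f.append(Fraction(s, 2 ** i))
    return f

f = excuse(3)
print(f[1], f[2], f[3])  # 1/2, 1, 23/16
```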

$O(n \log(n))$ time (perhaps slower) is possible. Let $f(x) = \sum_{i\ge0} f_i \frac{x^i}{i!}$. The answer is the sum of the probability that $\mathrm{mex} \ge i$ over $i \ge 1$. The EGF for a non-empty set of coins generating $j$ is $\e^{x/2^j} - 1$, so we have $f(x) = \sum_{i\ge1} \left( \prod_{1\le j\le i} (\e^{x/2^j} - 1) \right) \e^{x/2^i}$ (This converges formally). Note that we can obtain the same formula from the recurrence above: We get $f(2x) = (\e^x - 1) (\e^x + f(x))$, so apply it repeatedly.

Let $g(x) = \prod_{j\ge 1} \frac{\e^{x/2^j} - 1}{x/2^j}$ (this does not converge formally! Strictly speaking, we just use absolute convergence in $\mathbb{R}$ of each coefficient of $\log(g(x))$). We can compute it from $\log\left( \frac{\e^x - 1}{x} \right)$. We now have $f(x) = \sum_{i\ge1} 2^{-i(i+1)/2} x^i \frac{g(x)}{g(x/2^i)} \e^{x/2^i} = g(x) \sum_{i\ge1} \sum_{j\ge0} 2^{-i(i+1)/2} 2^{-ij} \left( [y^j] \frac{\e^y}{g(y)}\right) x^{i+j}$, so use $ij = \binom{i+j}{2} - \binom{i}{2} - \binom{j}{2}$ to convolve.

8340 3 Sum

First replace $a_i$ with $a_i \bmod M$. This can be done in linear time because every time a carry happens the sum of digits decreases by $9$.

Then we want to count $(i,j,k)$ such that $a_i + a_j + a_k \in \{0,M,2M\}$. We can use a hash $a_i \bmod p$ for a random prime $p$. For this part $O(nK)$ time plus $O(n^2)$ time with a hashmap. If we use $t$ random primes uniformly from $(l,r]$, the failure probability is bounded by $\frac{N(N+1)(N+2)}{6} \left(\frac{\log_l(3M)}{\pi(r)-\pi(l)}\right)^t$. Reasonable choices are for example $(t, l, r) = (3, 10^8, 10^9), (1, 10^{17}, 10^{18})$.

7895 Graph Partitioning 2

Consider bottom-up DP, with the size of the current component as a state. It is $\le k+1$. Let $s$ be the subtree size. If we cut $i$ components of size $k$ below, then the size of the current component is determined by $(s-ik) \bmod (k+1)$. So there are $\le \frac{s}{k}$ possibilities.

Since we have bounded the number of states both by the subtree size and by $O(\sqrt{n})$, we achieve $O(n\sqrt{n})$ time with naive merging.

7962 前缀和

The process is equivalent to repeatedly tossing a coin with success probability $p$, and if we have a success in the $k$-th toss record $k$ as a $y_i$. The answer is $p(r-l+1)$.

7968 分治乘法

Note that merging $k$ disjoint sets with total size $m$ takes a cost of $O(m \log(k))$ and the optimal way is Huffman's algorithm.
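The merge-cost primitive can be sketched as follows (each pairwise merge of two sets costs the size of the result; Huffman's algorithm on the sizes minimizes the total):

```python
# Optimal pairwise-merge schedule for sets of given sizes, via a min-heap.

import heapq

def huffman_merge_cost(sizes):
    h = list(sizes)
    heapq.heapify(h)
    cost = 0
    while len(h) > 1:
        a, b = heapq.heappop(h), heapq.heappop(h)
        cost += a + b                  # merging two sets costs |result|
        heapq.heappush(h, a + b)
    return cost

print(huffman_merge_cost([1, 1, 1, 1]))  # -> 8
```

Merging $k$ equal-size sets this way gives the $O(m \log(k))$ total cost used below.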

Divide the string into blocks of size $B$ and write $T = \bigsqcup_{S\subseteq[0,B)} \bigsqcup_{x\in X_S} (S + x)$. We can obtain $T$ as follows:

  1. For each $S \subseteq [0,B)$, create $X_S$ by merging singletons. The cost is $O(\sum_S |X_S| \log(|X_S|)) = O(\frac{n}{B} \log(\frac{n}{B}))$.
  2. For each $S \subseteq [0,B)$ and $s \in S$, create $X_S + s$ by shifting. The cost is $O(\sum_S |X_S| B) = O(n)$.
  3. For each $S \subseteq [0,B)$, create $\bigsqcup_{x\in X_S} (S + x) = \bigsqcup_{s\in S} (X_S + s)$ by merging. The cost is $O(\sum_S |X_S| B \log(B)) = O(n \log(B))$.
  4. Create $T = \bigsqcup_{S\subseteq[0,B)} \bigsqcup_{s\in S} (X_S + s)$ by merging. The cost is $O(n \log(2^B)) = O(nB)$.

We can achieve a total cost of $O(n \sqrt{\log(n)})$ by taking $B = \Theta(\sqrt{\log(n)})$.

7969 套娃

See [6273 马克思么克斯]. Sweep $k$, then each tuple $(l_0,l_1,r_0,r_1,b)$ gives $O(1)$ updates on the set of mexs. Use an erasable priority queue to do it in $O(n \log(n))$ time.

7497 葫芦娃救爷爷

Since knowing the death of 爷爷 does not help, the strategy is described simply by the next day of the rescue for a given state. Compute $f_i$, the probability of success from the end of day $i$ with alive 爷爷 but without 葫芦娃, in $O(n^2)$ time.

7498 网络图

Consider a graph on $[n]$ with $m$ edges, regarding a given $a$-$b$ path as an edge $\{a,b\}$. If we have an Eulerian circuit of this graph, following its orientation makes a perfectly balanced solution. Extending this idea, we can add some of the tree edges to make all degrees even. This can be done in post-order.

7499 么克斯马克思

Build a max Cartesian tree. A node with value $b$ corresponding to a range $[l, r]$ contributes $b$ to the mex for $1 \le k \le r-l+1$. It is easy to update the mex in increasing order of $k$, in $O(n)$ time.
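A brute-force check of the Cartesian-tree observation (assuming distinct values): the set of window maxima of length $k$ equals the set of node values whose range has length $\ge k$.

```python
# Naive O(n^2) max Cartesian tree; each node is (value, length of range).

def cartesian_ranges(a):
    out = []
    def build(l, r):                 # a[l..r], inclusive
        if l > r:
            return
        m = max(range(l, r + 1), key=lambda i: a[i])
        out.append((a[m], r - l + 1))
        build(l, m - 1)
        build(m + 1, r)
    build(0, len(a) - 1)
    return out

a = [3, 1, 4, 6, 5, 9, 2, 7]
nodes = cartesian_ranges(a)
for k in range(1, len(a) + 1):
    window_maxima = {max(a[i:i + k]) for i in range(len(a) - k + 1)}
    node_values = {b for b, length in nodes if length >= k}
    assert window_maxima == node_values
print("ok")
```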

7500 数圈圈

An $O(n^4/w)$ time solution ($w$ is the word size; $k \in \{3,4,5,6\}$ is regarded as constant) is fast enough: Fix $\lceil k/2 \rceil$ vertices on the cycle so that we can count the others independently. With some precomputation, the hardest part is counting common neighbors for $k = 5$, which is about $\frac{n^3}{2} \cdot \frac{n}{w}$ operations.

There exists an $O(n^3)$ time solution: Count the number of closed walks $(u_0,u_1,\ldots,u_{k-1},u_0)$ by inclusion-exclusion on the condition that $u_0,\ldots,u_{k-1}$ are distinct. Note that there is nothing to count when there is a condition $u_i = u_{i+1}$. Using symmetry, there are $12$ cases. Calculate their coefficients locally. The case with no condition is done by simple $O(n^3)$ time DP ($kn^3$), and the other cases are found to be done in $O(n^3/w)$ time.

12992 Counting Is Fun

Consider the lattice path of the prefix sums $s_i = \sum_{1\le j\le i} (-1)^{p_j}$. The condition for $i$ with $p_i = 1$ becomes $\max\{ s_0,\ldots,s_{i-1} \} \ge \min\{ s_i,\ldots,s_n \}$. We apply inclusion-exclusion. We cannot violate the condition both for $0$ and for $1$, so we assume we fix indices $i_1 < \cdots < i_k$ for violation and $p_{i_j} = 1$. Its weight is $(-1)^k f_{i_1-1} g_{i_2-i_1-1} \cdots g_{i_k-i_{k-1}-1} f_{n-i_k}$ where:

  • $f_i$: The number of sequences of $+$s and $-$s, of length $i$, such that every prefix sum is $\ge 0$.
  • $g_i$: The number of sequences of $+$s and $-$s, of length $i$, such that every prefix sum is $\in [0, h]$ where $h$ is the total.

After finding $f_i$ and $g_i$, we can compute the sum of weights with online convolution in $O(n \log(n)^2)$ time.

We see $f_i = \binom{i}{\lfloor i/2 \rfloor}$. For even $i$, $f_i = 2 f_{i-1}$, and there is a bijection between such sequences and sequences with total $0$ but without a prefix sum condition:

  • Prefix sum $\ge 0$: Let the total be $2h$, then we can remove $2h$ $+$s to get $(2h+1)$ valid bracket sequences.
  • Total $0$: Let the prefix minimum be $-h$, then we can remove $h$ $-$s and $h$ $+$s to get $(2h+1)$ valid bracket sequences.
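A brute-force check of $f_i = \binom{i}{\lfloor i/2 \rfloor}$ for small $i$:

```python
# f_i: number of +-1 sequences of length i with all prefix sums >= 0.

from itertools import product
from math import comb

def f_bruteforce(i):
    count = 0
    for seq in product([1, -1], repeat=i):
        s, ok = 0, True
        for x in seq:
            s += x
            if s < 0:
                ok = False
                break
        count += ok
    return count

for i in range(8):
    assert f_bruteforce(i) == comb(i, i // 2)
print("ok")
```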

Finding $g_i$ is the hardest part. Fixing the total $h$ to use the reflection method, we get $g_i = \sum_{h\ge0} \sum_{k\in\mathbb{Z}} ([x^{h+(2h+4)k}] - [x^{h+2+(2h+4)k}]) (x + x^{-1})^i$. We can write it as $g_i = [x^0] a(x) (x + x^{-1})^i$ for some $a(x)$, which can be found in $O(n \log(n))$ time, so it is a power projection problem. It might be easier to think of it as the transposed problem of composing $x + x^{-1}$ from the right. Divide-and-conquer runs in $O(n \log(n)^2)$ time. Using $x + x^{-1} = (2x) \circ \frac{x+1}{x-1} \circ x^2 \circ \frac{x+1}{x-1}$ we can achieve $O(n \log(n))$ time here.

As a non-solution, it is tempting to find $g_i$ via its OGF $g(z)$. For a fixed total $h$, we can decompose the sequence as $b_h(z) \cdot z \cdot b_{h-1}(z) \cdot z \cdot \cdots \cdot z \cdot b_1(z) \cdot z$, where $b_h(z)$ denotes the OGF for a sequence such that every prefix sum is in $[0, h]$ and the total is $0$. We have $b_0(z) = 1$ and $b_h(z) = \dfrac{1}{1 - z^2 b_{h-1}(z)}$ so these are rational, and we can write $b_h(z) = \dfrac{c_{h-1}(z)}{c_h(z)}$ where $c_{-1}(z) = c_0(z) = 1$ and $c_h(z) = c_{h-1}(z) - z^2 c_{h-2}(z)$. Hence we get $g(z) = \sum_{h\ge0} \dfrac{z^h}{c_h(z)}$. However I don't know how to proceed from here. I feel that the sum of $O(n/h)$ binomial coefficients arising from the reflection method is somehow mysterious and sometimes stronger than generating functions...

10106 Homeward Glance

Let $K = \mathbb{F}_{998244353}$, $M = K^n$ and regard $M$ as a $K[T]$-module by $T \cdot v = Av$. This way $AB = BA$ is equivalent to $B\colon M \to M$ being a $K[T]$-homomorphism. By the structure theorem, we have $M \cong K[T]/(p_1) \oplus \cdots \oplus K[T]/(p_k)$ for $p_1,\ldots,p_k \in K[T]$ such that $p_k \mid \cdots \mid p_1$ (the invariant factors). As a property of a biproduct, $B$ corresponds to a tuple $(b_{i,j})$ where $b_{i,j}\colon K[T]/(p_i) \to K[T]/(p_j)$ is a $K[T]$-homomorphism. Note that $b_{i,j}$ is determined by $b_{i,j}(1)$, which must be $f \bmod p_j$ for some $f \in K[T]$ such that $p_j \mid p_i f$. Now it is easy to see there are $|K|^{\min\{\deg(p_i), \deg(p_j)\}}$ of them. Thus the answer is $\sum_{i,j} \min\{\deg(p_i), \deg(p_j)\}$.

There exists a simple $O(n^3)$ time randomized algorithm to find $p_1, \ldots, p_k$. In summary: $p_1$ is the minimal polynomial of $A$, and is the minimal polynomial of a random vector $v \in M$ with high probability. So we check $v, Av, A^2v, \ldots$ until we find a relation to find $p_1$. Repeat for the quotient $M / \langle v, Av, A^2v, \ldots \rangle$.
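A toy sketch of the minimal-polynomial step (over a tiny prime field for readability; the problem itself uses $998244353$): row-reduce the Krylov vectors $v, Av, A^2v, \ldots$ while tracking how each is combined from the powers of $A$ applied to $v$; the first dependency gives the minimal polynomial of $v$.

```python
P = 7  # toy modulus for the sketch

def matvec(A, v, p=P):
    return [sum(r * x for r, x in zip(row, v)) % p for row in A]

def min_poly_of_vector(A, v, p=P):
    """Monic minimal polynomial of v under A, as [c_0, ..., c_d]."""
    n = len(v)
    pivots = {}                           # lowest nonzero index -> (vector, combo)
    w, combo = v[:], [1] + [0] * (n + 1)  # w = A^step v; combo[i] weights A^i v
    for step in range(n + 1):
        red, c = w[:], combo[:]
        while True:
            j = next((j for j in range(n) if red[j]), None)
            if j is None:                 # dependency: sum_i c_i A^i v = 0
                return c[: step + 1]      # monic since c[step] stays 1
            if j not in pivots:
                pivots[j] = (red, c)
                break
            bv, bc = pivots[j]
            t = red[j] * pow(bv[j], p - 2, p) % p
            red = [(x - t * y) % p for x, y in zip(red, bv)]
            c = [(x - t * y) % p for x, y in zip(c, bc)]
        w, combo = matvec(A, w, p), [0] + combo[:-1]

print(min_poly_of_vector([[2, 0], [0, 3]], [1, 1]))  # -> [6, 2, 1], i.e. (x-2)(x-3) mod 7
```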

13329 Counting Cactus (Hard version)

For each $r \in [0,n]$ and $S \subseteq [n]$, let $\mathcal{G}(r,S)$ be the set of subgraphs within vertex set $S$ such that it is a cactus and if $u \in [r] \cap S$ then $u$ is not an articulation point. We want to go from $r=0$ to $r=n$: $\#\mathcal{G}(0,S)$ counts simple cycles and the answer is $\#\mathcal{G}(n,[n])$. This idea is the inverse of biconnected graph counting.

Counting simple cycles for each vertex set is done by DP (for example fix the largest vertex in a cycle) in $O(2^n n^2)$ time.

A subgraph $G \in \mathcal{G}(r,S)$ with $S \ni r$ uniquely determines $k \ge 0$ disjoint sets $T_1, \ldots, T_k \subseteq [n] - \{r\}$ and $G_i \in \mathcal{G}(r, T_i \cup \{r\})$ such that the edges of $G$ are the disjoint union of the edges of the $G_i$. So this is exactly $\exp$ of the set power series on $[n] - \{r\}$. $O(2^n n^3)$ time overall.
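A plain (not subset-convolution-optimized) $O(3^n)$ sketch of set power series $\exp$: each term of the partition is the piece containing the minimum element, so $h[S] = \sum_{T \subseteq S,\ \min(S) \in T} f[T] \cdot h[S \setminus T]$ with $f[\emptyset] = 0$, $h[\emptyset] = 1$.

```python
# exp of a set power series f over subsets of [n] (bitmask indexed).

def subset_exp(f, n):
    h = [0] * (1 << n)
    h[0] = 1
    for S in range(1, 1 << n):
        low = S & (-S)                 # bit of min(S)
        rest = S ^ low
        T = rest                       # enumerate subsets T | low of S
        while True:
            h[S] += f[T | low] * h[S ^ (T | low)]
            if T == 0:
                break
            T = (T - 1) & rest
    return h

# f supported on singletons: exp gives products over the elements
f = [0] * 8
f[1], f[2], f[4] = 2, 3, 5
h = subset_exp(f, 3)
assert h[0b111] == 2 * 3 * 5
print(h)
```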

13330 Dirichlet $k$-th root (Hard version)

We want to use the differential method $g' * f = k (g * f')$, but the differential of Dirichlet series is not nice: $\frac{\d}{\d s} \sum_{n\ge1} f(n) n^{-s} = \sum_{n\ge1} f(n) n^{-s} (-\log(n))$.

Our requirements are linearity and the Leibniz rule $D(f*g) = (Df)*g + f*(Dg)$. We can notice that these properties for $D = \frac{\d}{\d s}$ rely only on the fact that $-\log(n)$ is completely additive. So we get the idea of replacing $-\log(n)$ with $\Omega(n)$, the number of prime factors of $n$ counted with multiplicity.

Therefore with $(Df)(n) = f(n) \Omega(n)$ we have $(Dg) * f = k (g * (Df))$ and from this we can derive the recurrence. $O(n \log(n))$ time.
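Spelling the recurrence out (with $g = f^k$ known, $g(1) = 1$, $\Omega(1) = 0$, comparing the coefficient of $f(n)$ on both sides of $(Dg)*f = k(g*(Df))$): $f(n) = \frac{1}{k\,\Omega(n)} \sum_{d \mid n,\ d > 1} g(d)\, f(n/d)\, (\Omega(d) - k\,\Omega(n/d))$. A slow sketch in exact rationals (the problem works modulo a prime and needs the $O(n \log(n))$ divisor-style loops):

```python
# Dirichlet k-th root f of g (g[1] = 1), via the Omega-based recurrence.

from fractions import Fraction

def dirichlet_root(g, k, N):
    omega = [0] * (N + 1)          # Omega(n): prime factors with multiplicity
    for n in range(2, N + 1):
        m, p = n, 2
        while p * p <= m:
            while m % p == 0:
                omega[n] += 1
                m //= p
            p += 1
        if m > 1:
            omega[n] += 1
    f = [Fraction(0)] * (N + 1)
    f[1] = Fraction(1)
    for n in range(2, N + 1):
        s = sum(g[d] * f[n // d] * (omega[d] - k * omega[n // d])
                for d in range(2, n + 1) if n % d == 0)
        f[n] = s / (k * omega[n])
    return f

# sanity check: g = zeta * zeta, i.e. g(n) = number of divisors of n,
# whose square root is zeta (all ones)
N = 30
g = [0] + [sum(1 for d in range(1, n + 1) if n % d == 0) for n in range(1, N + 1)]
f = dirichlet_root(g, 2, N)
assert all(f[n] == 1 for n in range(1, N + 1))
print("ok")
```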
