Cryptanalysis of symmetric-key ciphers, e.g., linear/differential cryptanalysis, requires an adversary to know the internal structures of the target ciphers. In contrast, deep learning-based cryptanalysis has attracted significant attention because the adversary is not assumed to have any knowledge of the target ciphers beyond their algorithm interfaces. Such cryptanalysis in a black-box setting is extremely strong; thus, we must design symmetric-key ciphers that are secure against deep learning-based cryptanalysis. However, most previous attacks do not clarify which features or internal structures affect their success probabilities. Although Benamira et al. (Eurocrypt 2021) and Chen et al. (ePrint 2021) analyzed Gohr’s results (CRYPTO 2019), they did not find any characteristic specific to deep learning, i.e., one that affects the success probabilities of deep learning-based attacks but does not affect those of linear/differential cryptanalysis. Therefore, it is difficult to employ the results of such cryptanalysis to design deep learning-resistant symmetric-key ciphers. In this paper, we focus on two toy SPN block ciphers (small PRESENT and small AES) and one toy Feistel block cipher (small TWINE) and propose deep learning-based output prediction attacks. Owing to their small internal structures, we can construct deep learning models by employing the maximum number of plaintext/ciphertext pairs, and we can precisely calculate the rounds in which full diffusion occurs. Specifically for the SPN block ciphers, we demonstrate the following: (1) our attacks work against a number of rounds similar to that attacked by linear/differential cryptanalysis, (2) our attacks realize output predictions (precisely, ciphertext prediction and plaintext recovery) that are much stronger than distinguishing attacks, and (3) swapping the order of components or replacing components affects the success probabilities of the proposed attacks.
It is particularly worth noting that this is a characteristic specific to deep learning, because such swapping/replacement does not affect the success probabilities of linear/differential cryptanalysis. We also examine whether the proposed attacks work on the Feistel block cipher. We expect that our results will be an important stepping stone in the design of deep learning-resistant symmetric-key ciphers.