Simion and Kelp develop an obligation-based account of trustworthiness as a compelling general account and then apply it to various instances of AI, explaining in what sense any AI can be considered trustworthy on that account. In doing so, they reject any account of trustworthiness that rests on overly anthropocentric assumptions about agency, such as the assumption that being trustworthy must involve goodwill. I argue that goodwill is nevertheless a necessary condition for trustworthiness, and I propose an alternative network account of trustworthy AI, which retains goodwill as an essential requirement of the concept while still predicting that current AI can be trustworthy. On this account, the proper object of trustworthy AI is not AI technology alone but the whole network in which the technology is embedded: the AI technology itself, its designers, the companies that deploy it, and the relevant social and legal institutions. Trustworthy AI requires that the AI technology be reliable and that the other agents involved be trustworthy.